% grep "Product-Version" server/server.log
Product-Version: 9.9.1
…connect from the gfsh shell, or is he stringing gfsh commands from his terminal prompt?!) cannot be escaped. You can do something like this, though:

eval '$GFSH -e "connect --user=oof --password=S*A8!j6c"'
SBDG versions: 1.1.10.RELEASE, 1.2.10.RELEASE, 1.3.4.RELEASE and 1.4.0-M3.
- SBDG 1.1.10.RELEASE is based on Spring Boot 2.1.17.RELEASE.
- SBDG 1.2.10.RELEASE is based on Spring Boot 2.2.10.RELEASE.
- SBDG 1.3.4.RELEASE is based on Spring Boot 2.3.4.RELEASE.
- SBDG 1.4.0-M3 is based on Spring Boot 2.4.0-M3.
Each release version pulls in the latest versions and bits of SDG, SSDG and STDG, respectively (spring-geode-starter-test for STDG, spring-geode-starter-session for SSDG, etc.).

Spring Data release trains: 2.1.20.RELEASE, Moore-SR10 / 2.2.10.RELEASE, Neumann-SR4 / 2.3.4.RELEASE and Ockham-RC1 / 2020.0.0-RC1 (2.4.0-RC1). All of these releases include Spring Data for Apache Geode & VMware Tanzu GemFire (SDG). NOTE: SDG Ockham/2020.0.0-x/2.4.x drops dedicated support for VMware Tanzu GemFire now that SDG is based on Apache Geode 1.13.0 in this release series. That is, there is no longer a spring-data-gemfire module in SDG 2020.0.0-RC1/2.4.0-RC1. See the changelog for complete details. Blog Post: https://sp…
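For reference, pulling one of the SBDG releases listed above into a Maven build might look like the following sketch (using 1.3.4.RELEASE as the example version; swap in spring-geode-starter-session or spring-geode-starter-test for the SSDG/STDG starters):

```xml
<dependency>
  <groupId>org.springframework.geode</groupId>
  <artifactId>spring-geode-starter</artifactId>
  <version>1.3.4.RELEASE</version>
</dependency>
```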
Select * from /region where foo.bar = 100L
Select * from /region where bar = 100L
gfsh>start locator --name=locator1 --port=40000 --properties-file=gemfire.properties
Starting a Geode Locator in /Users/dkhopade/GemFire Support Cases/Synchrony Financial/257153/locator1...
The Locator process terminated unexpectedly with exit status 1. Please refer to the log file in /Users/dkhopade/GemFire Support Cases/Synchrony Financial/257153/locator1 for full details.
Exception in thread "main" org.apache.geode.security.AuthenticationFailedException: ExampleSecurityManager: unable to find json resource "security.json" as specified by [security-json].
at org.apache.geode.examples.security.ExampleSecurityManager.init(ExampleSecurityManager.java:137)
at org.apache.geode.internal.security.CallbackInstantiator.getSecurityManager(CallbackInstantiator.java:67)
at org.apache.geode.internal.security.SecurityServiceFactory.create(SecurityServiceFactory.java:60)
at org.apache.geode.distributed.internal.InternalDistributedSystem.initialize(InternalDistributedSystem.java:652)
at org.apache.geode.distributed.internal.InternalDistributedSystem.access$200(InternalDistributedSystem.java:135)
at org.apache.geode.distributed.internal.InternalDistributedSystem$Builder.build(InternalDistributedSystem.java:3000)
at org.apache.geode.distributed.internal.InternalDistributedSystem.connectInternal(InternalDistributedSystem.java:251)
at org.apache.geode.distributed.DistributedSystem.connect(DistributedSystem.java:158)
at org.apache.geode.distributed.internal.InternalLocator.startDistributedSystem(InternalLocator.java:700)
at org.apache.geode.distributed.internal.InternalLocator.startLocator(InternalLocator.java:374)
at org.apache.geode.distributed.LocatorLauncher.start(LocatorLauncher.java:679)
at org.apache.geode.distributed.LocatorLauncher.run(LocatorLauncher.java:587)
at org.apache.geode.distributed.LocatorLauncher.main(LocatorLauncher.java:209)
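For context on the failure above: ExampleSecurityManager resolves the file named by the security-json property as a classpath resource, so the directory containing security.json must be on the locator's classpath when it starts. A hedged sketch (the paths are illustrative, and the property values are assumed from the Geode security examples):

```shell
# gemfire.properties (assumed values, per the Geode security examples):
#   security-manager=org.apache.geode.examples.security.ExampleSecurityManager
#   security-json=security.json

# Start the locator with the directory holding security.json on its classpath
gfsh -e "start locator --name=locator1 --port=40000 --properties-file=gemfire.properties --classpath=/path/to/dir-containing-security-json"
```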
gfsh>

…security.json file ideally, or in my case the local cluster?

…comparable (like BigDecimal is), does the OQL engine automatically evaluate the predicate via field.equals(predicate)? Does it even do this automatically when the Region data is not serialized?

…log-file-size-limit. Setting this up works well for locators/servers and it is rolling to a new file after the set limit; however, it is not working for Pulse.

<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="FATAL" shutdownHook="disable" packages="org.apache.geode.internal.logging.log4j">
<Properties>
<Property name="geode-pattern">[%level{lowerCase=true} %date{yyyy/MM/dd HH:mm:ss.SSS z} <%thread> tid=%tid] %message%n%throwable%n</Property>
<Property name="geode-default">true</Property>
</Properties>
<Appenders>
<Console name="STDOUT" target="SYSTEM_OUT">
<PatternLayout pattern="${geode-pattern}"/>
</Console>
<!-- EXTENSION - START -->
<RollingFile name="PulseRollingFile" fileName="${sys:pulse.Log-File-Location}" filePattern="pulse.log.%i">
<PatternLayout>
<Pattern>%d %p %c{1.} [%t] %m%n</Pattern>
</PatternLayout>
<Policies>
<SizeBasedTriggeringPolicy size="1 MB"/>
</Policies>
<DefaultRolloverStrategy max="10"/>
</RollingFile>
<!-- EXTENSION - FINISH -->
</Appenders>
<Loggers>
<Logger name="com.gemstone" level="INFO" additivity="true"/>
<Logger name="org.apache.geode" level="INFO" additivity="true">
<filters>
<MarkerFilter marker="GEODE_VERBOSE" onMatch="DENY" onMismatch="NEUTRAL"/>
</filters>
</Logger>
<Logger name="org.jgroups" level="FATAL" additivity="true"/>
<Logger name="org.eclipse.jetty" level="FATAL" additivity="true"/>
<!-- EXTENSION - START -->
<Logger name="org.apache.geode.tools.pulse.internal" level="FINEST" additivity="true">
<AppenderRef ref="PulseRollingFile"/>
</Logger>
<!-- EXTENSION - FINISH -->
<Root level="INFO">
<AppenderRef ref="STDOUT"/>
</Root>
</Loggers>
</Configuration>

gfsh>search lucene --name=indexName --region=/orders --queryString="Jones*" --defaultField=customer
{"4970": "4dA4F6c 4H(2*83EgD2a",
"3640": "ECd69*i( Eda477d2CCa",
"4971": "iH-9Dig f5EAE5fa83Gg",
"2306": "(d)f*9 G))CE-1D(d-5E",
Index Name: index Region: COUNTERPARTY Indexed Fields: code Field Analyzer: default Serializer: HeterogeneousLuceneSerializer Status: initialized
…get lookups, which allowed them quick and easy route calculations and ticket availability between locations. The key for them was to use the strengths of the product and NOT try to make the product work the way that they wanted it to work (how often do we see an RDBMS approach applied to GF). This approach (it can be equated to mechanical sympathy) allowed them to run at high scale with minimal memory overhead and footprint.

…IN SET ( pageKeySet ) predicate on the second query . . .

…getAll() is using an OQL query vs Region.getAll() is also weird behavior (and extremely inefficient).

Exception in thread "main" org.apache.geode.cache.client.NoAvailableServersException: org.apache.geode.cache.client.ServerRefusedConnectionException: 192.168.86.222(server-ln-1:58918)<v1>:41001(version:GEODE 1.12.0) refused connection: Peer or client version with ordinal 125 not supported. Highest known version is 1.12.0 Client: /192.168.86.222:51959.
[info 2020/11/18 16:23:02.881 EST cmcbldgemfgrd06-dev3-server1 <ServerConnection on port 20104 Thread 38146> tid=0xacf9] Server connection from [identity(0.0.0.0(DEV3:6480:loner):2:GFNative_8MFDj0D46X6480:DEV3,connection=1; port=54416]: connection disconnect detected by EOF.
[identity(0.0.0.0(DEV3:6480:loner):2:GFNative_8MFDj0D46X6480:DEV3,connection=1; port=54416]
High Priority Threads are handling a large number of requests, and this becomes an obvious bottleneck. At first we saw the 100 HP threads maxed out, and the HP thread task queue also maxed out at 80,000. We have cranked these up to higher values (last night 2,000 HP threads and a 150K queue size), but even with this tuning the queue fills up and requests start timing out, though far less than before. The question is: why would GemFire use the HP thread pool to process "simple" cache read/write operations? We are not accustomed to seeing this as a runtime bottleneck . . . If we can understand the condition that triggers the use of this pool, perhaps we can figure out the underlying cause of their inconsistent response times (CC: @davidwisler/@tmorr/@cson).

…the highPriorityThreads stat gets pegged at 100. Then you increased the DistributionManager.MAX_THREADS value to allow more HP threads. Is that right?

For example, the region REGION is what we are trying to connect to. There are two different types of users: dataUser and regionUser. The regions of dataUser's role are ["*"] and the region of regionUser's role is ["REGION"]. During a regular query, the ResourcePermission required by the server is "DATA:READ:REGION", which works for both users. While executing a Lucene query, the ResourcePermission required by the server became "DATA:READ", while regionUser is only entitled to "DATA:READ:REGION". So the execution failed with org.apache.geode.security.NotAuthorizedException.
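A minimal sketch of what the two users above might look like in an ExampleSecurityManager-style security.json (the role names, passwords, and the exact schema are assumptions based on the Geode security examples, not taken from this case):

```json
{
  "roles": [
    { "name": "dataAll",    "operationsAllowed": ["DATA:READ", "DATA:WRITE"] },
    { "name": "regionOnly", "operationsAllowed": ["DATA:READ:REGION", "DATA:WRITE:REGION"] }
  ],
  "users": [
    { "name": "dataUser",   "password": "changeme", "roles": ["dataAll"] },
    { "name": "regionUser", "password": "changeme", "roles": ["regionOnly"] }
  ]
}
```

Under this sketch, a server-side requirement of DATA:READ (no region qualifier) is satisfied only by dataUser, which matches the NotAuthorizedException observed for regionUser.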
protected int transactionId = TXManagerImpl.NOTX;
/**
 * Default transaction id to indicate no transaction
 */
public static final int NOTX = -1;
protected AbstractOp(int msgType, int msgParts) {
msg = new Message(msgParts, KnownVersion.CURRENT);
getMessage().setMessageType(msgType);
}

[error 2020/12/08 20:10:16.237 EST <Event Processor for GatewaySender_AsyncEventQueue_gfrWBOrderQ> tid=0x1ab] Deserialized event not a pdx instance. Should not happen..
There is quite a bit of overlap in capabilities (outside of serious streaming, e.g. Jet).

…AsyncEventQueueFactoryImpl, and it has been the same since Geode was first imported to Git in 2015:

/**
 * The default batchTimeInterval for AsyncEventQueue in milliseconds.
 */
public static final int DEFAULT_BATCH_TIME_INTERVAL = 5;
/**
 * The default batch time interval in milliseconds
 */
int DEFAULT_BATCH_TIME_INTERVAL = 1000;
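If the 5 ms AsyncEventQueue default is too aggressive, the interval can be set explicitly in cache.xml; a hedged sketch (the queue id and listener class name are placeholders):

```xml
<cache xmlns="http://geode.apache.org/schema/cache" version="1.0">
  <!-- batch-time-interval is in milliseconds; 1000 here matches the GatewaySender default -->
  <async-event-queue id="sampleQueue" batch-size="100" batch-time-interval="1000">
    <async-event-listener>
      <class-name>example.SampleAsyncEventListener</class-name>
    </async-event-listener>
  </async-event-queue>
</cache>
```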
SELECT * FROM /B

I do see different threads handling the events arriving while the initial query is happening. Here is some example logging of what I see:

CQEventHandler For client_B_cq: TestCqListener processing event 1: key=0; value=4
CQEventHandler For client_B_cq: TestCqListener processing event 2: key=0; value=5
Thread-5: Retrieved the following 1 initial results: [struct(key:0,value:4)]
Cache Client Updater Thread on 192.168.1.15(41886)<v1>:41001 port 56761: TestCqListener processing event 3: key=0; value=6
Cache Client Updater Thread on 192.168.1.15(41886)<v1>:41001 port 56761: TestCqListener processing event 4: key=0; value=7
...

CQEventHandler For client_B_cq: TestCqListener processing event 1: key=0; value=5
CQEventHandler For client_B_cq: TestCqListener processing event 2: key=0; value=6
Thread-5: Retrieved the following 1 initial results: [struct(key:0,value:5)]
Cache Client Updater Thread on 192.168.1.15(41886)<v1>:41001 port 56761: TestCqListener processing event 3: key=0; value=7
Cache Client Updater Thread on 192.168.1.15(41886)<v1>:41001 port 56761: TestCqListener processing event 4: key=0; value=8
...

CQEventHandler For client_B_cq: TestCqListener processing event 1: key=0; value=5
CQEventHandler For client_B_cq: TestCqListener processing event 2: key=0; value=6
CQEventHandler For client_B_cq: TestCqListener processing event 3: key=0; value=7
Thread-4: Retrieved the following 1 initial results: [struct(key:0,value:6)]
Cache Client Updater Thread on 192.168.1.15(41886)<v1>:41001 port 56761: TestCqListener processing event 4: key=0; value=8
Cache Client Updater Thread on 192.168.1.15(41886)<v1>:41001 port 56761: TestCqListener processing event 5: key=0; value=9
...
…start server input arguments are passed into the builder constructor (without "--").

gfsh start server --name=server-1 --server-port=0 --statistic-archive-file=cacheserver.gfs --J=-Dgemfire.log-file=cacheserver.log --J=-Dgemfire.conserve-sockets=false
java -server -classpath /path/to/lib/geode-core-1.14.0-build.0.jar:/path/to/lib/geode-dependencies.jar -Dgemfire.default.locators=10.166.144.231[10334] -Dgemfire.start-dev-rest-api=false -Dgemfire.use-cluster-configuration=true -Dgemfire.statistic-archive-file=cacheserver.gfs -Dgemfire.log-file=cacheserver.log -Dgemfire.conserve-sockets=false -XX:OnOutOfMemoryError=kill -KILL %p -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806 org.apache.geode.distributed.ServerLauncher start server-1 --server-port=0
…setWorkingDirectory(), only the .pid file goes to that directory. The .dat file and log file stay in the directory where we run the BASH script that executes the custom launching code (which bootstraps an X509 custom certificate validator). Will we have to cd to the same dir as the "working directory" first (in our launcher script) before executing the Java proc?

[info 2021/03/02 07:17:26.473 CST <PartitionedRegion Message Processor27> tid=0x198e] Exception occurred while processing DistributedPutAllOperation(EntryEventImpl[op=PUTALL_CREATE;region=/__PR/_B__ShipmentsView_51;key=null;callbackArg=null;originRemote=true;originMember=172.23.16.208(server-cargo-prodblue.xlsgfdcara04:12404)<v7>:1024;context=identity(172.27.52.241(32879:loner):33068:e487a7e7,connection=1;id=EventID[id=25 bytes;threadID=52000003;sequenceID=2831]])
java.lang.NullPointerException
at java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:936)
at org.apache.geode.cache.query.cq.internal.ServerCQResultsCachePartitionRegionImpl.remove(ServerCQResultsCachePartitionRegionImpl.java:69)
at org.apache.geode.cache.query.cq.internal.ServerCQImpl.removeFromCqResultKeys(ServerCQImpl.java:297)
at org.apache.geode.internal.cache.DistributedCacheOperation.removeDestroyTokensFromCqResultKeys(DistributedCacheOperation.java:743)
at org.apache.geode.internal.cache.DistributedCacheOperation._distribute(DistributedCacheOperation.java:693)
at org.apache.geode.internal.cache.DistributedCacheOperation.startOperation(DistributedCacheOperation.java:277)
at org.apache.geode.internal.cache.DistributedRegion.postPutAllSend(DistributedRegion.java:3304)
at org.apache.geode.internal.cache.LocalRegionDataView.postPutAll(LocalRegionDataView.java:358)
at org.apache.geode.internal.cache.partitioned.PutAllPRMessage.doPostPutAll(PutAllPRMessage.java:568)
at org.apache.geode.internal.cache.partitioned.PutAllPRMessage.doLocalPutAll(PutAllPRMessage.java:507)
at org.apache.geode.internal.cache.partitioned.PutAllPRMessage.operateOnPartitionedRegion(PutAllPRMessage.java:326)
at org.apache.geode.internal.cache.partitioned.PartitionMessage.process(PartitionMessage.java:333)
at org.apache.geode.distributed.internal.DistributionMessage.scheduleAction(DistributionMessage.java:376)
at org.apache.geode.distributed.internal.DistributionMessage$1.run(DistributionMessage.java:440)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at org.apache.geode.distributed.internal.ClusterOperationExecutors.runUntilShutdown(ClusterOperationExecutors.java:442)
at org.apache.geode.distributed.internal.ClusterOperationExecutors.doPartitionRegionThread(ClusterOperationExecutors.java:422)
at org.apache.geode.logging.internal.executors.LoggingThreadFactory.lambda$newThread$0(LoggingThreadFactory.java:119)
at java.lang.Thread.run(Thread.java:748)

We are working on upgrading to 9.10.5 and ran into an issue…
One of the two servers hangs during shutdown due to the following thread not being a daemon thread (the executor does not get stopped by anything else either):
"ColocationLogger for UDT_Data" #124 prio=5 os_prio=0 tid=0x00007f34ecb5f000 nid=0x184164 waiting on condition [0x00007f3452ee2000]
java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x00000000e74b8b10> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
. . .

alter region doesn’t allow you to specify the --member option. So the cluster configuration service has to be disabled on the cluster in order for this command to work.
package com.vmware.tanzu.data.IoT.vehicles.domains
data class Vehicle(
var vin: String = "",
var odometer: Long = 0,
var speed : Int = 0,
var temperature: Int = 0,
var gpsLocation: GpsLocation? = null
) {
data class GpsLocation(
var lat: Long = 0,
var lon: Long = 0,
)
}

com/vmware/tanzu/data/IoT/vehicles/domains/Vehicle.class
com/vmware/tanzu/data/IoT/vehicles/domains/Vehicle$GpsLocation.class
<pdx read-serialized="false">
<pdx-serializer>
<class-name>org.apache.geode.pdx.ReflectionBasedAutoSerializer</class-name>
<parameter name="classes">
<string>com.vmware.tanzu.data.IoT.vehicles.domains.*</string>
</parameter>
</pdx-serializer>
</pdx>

[info 2021/03/11 14:57:12.118 PST client <main> tid=0x1] Caching PdxType[dsid=0, typenum=173529
name=com.vmware.tanzu.data.IoT.vehicles.domains.Vehicle$GpsLocation
fields=[
lat:long:0:idx0(relativeOffset)=0:idx1(vlfOffsetIndex)=0
lon:long:1:idx0(relativeOffset)=8:idx1(vlfOffsetIndex)=0]]
[info 2021/03/11 14:57:12.121 PST client <main> tid=0x1] Caching PdxType[dsid=0, typenum=7044482
name=com.vmware.tanzu.data.IoT.vehicles.domains.Vehicle
fields=[
odometer:long:0:idx0(relativeOffset)=0:idx1(vlfOffsetIndex)=0
speed:int:1:idx0(relativeOffset)=8:idx1(vlfOffsetIndex)=0
temperature:int:2:idx0(relativeOffset)=12:idx1(vlfOffsetIndex)=0
vin:String:3:idx0(relativeOffset)=16:idx1(vlfOffsetIndex)=-1
gpsLocation:Object:4:1:idx0(relativeOffset)=0:idx1(vlfOffsetIndex)=1]]

com.vmware.tanzu.data.IoT.vehicles.domains.GpsLocation
GPSLocation == GPSLocationKT

public final class com.vmware.tanzu.data.IoT.vehicles.domains.Vehicle {
public final java.lang.String getVin();
public final void setVin(java.lang.String);
public final long getOdometer();
public final void setOdometer(long);
public final int getSpeed();
public final void setSpeed(int);
public final int getTemperature();
public final void setTemperature(int);
public final com.vmware.tanzu.data.IoT.vehicles.domains.GpsLocation getGpsLocation();
public final void setGpsLocation(com.vmware.tanzu.data.IoT.vehicles.domains.GpsLocation);
public com.vmware.tanzu.data.IoT.vehicles.domains.Vehicle(java.lang.String, long, int, int, com.vmware.tanzu.data.IoT.vehicles.domains.GpsLocation);
public com.vmware.tanzu.data.IoT.vehicles.domains.Vehicle(java.lang.String, long, int, int, com.vmware.tanzu.data.IoT.vehicles.domains.GpsLocation, int, kotlin.jvm.internal.DefaultConstructorMarker);
public com.vmware.tanzu.data.IoT.vehicles.domains.Vehicle();
public final java.lang.String component1();
public final long component2();
public final int component3();
public final int component4();
public final com.vmware.tanzu.data.IoT.vehicles.domains.GpsLocation component5();
public final com.vmware.tanzu.data.IoT.vehicles.domains.Vehicle copy(java.lang.String, long, int, int, com.vmware.tanzu.data.IoT.vehicles.domains.GpsLocation);
public static com.vmware.tanzu.data.IoT.vehicles.domains.Vehicle copy$default(com.vmware.tanzu.data.IoT.vehicles.domains.Vehicle, java.lang.String, long, int, int, com.vmware.tanzu.data.IoT.vehicles.domains.GpsLocation, int, java.lang.Object);
public java.lang.String toString();
public int hashCode();
public boolean equals(java.lang.Object);
}

[root@ip-10-120-28-222 ~]# curl internal-ALB-Gemfire-1149703819.ap-northeast-1.elb.amazonaws.com:8080/geode/v1/M2_Event_MBB
{
"M2_Event_MBB" : [ {
"event_id" : "M00454",
"gift_id" : null,
"condition_1" : null,
"card_remark" : "",
"card_notice" : "",
"event_name" : "中國信託優惠活動"
}, {
"event_id" : "M00465",
"gift_id" : null,
"condition_1" : null,
"card_remark" : "",
"card_notice" : "",
"event_name" : "QQA會員獨享+FreePrint"
}, {
...
[root@ip-10-120-28-222 ~]# curl -X POST -H 'Accept: application/json' -H 'Content-Type: application/json' \
> -d '{"@type":"string", "@value":"M00465"}' \
> "internal-ALB-Gemfire-1149703819.ap-northeast-1.elb.amazonaws.com:8080/geode/v1/queries/getM2EventMBB"
[ {
"event_id" : "M00465",
"gift_id" : null,
"condition_1" : null,
"card_remark" : "",
"card_notice" : "",
"event_name" : "QQA\u6703\u54E1\u7368\u4EAB+FreePrint"
} ]

public static JsonGenerator enableDisableJSONGeneratorFeature(JsonGenerator generator) {
generator.enable(JsonWriteFeature.ESCAPE_NON_ASCII.mappedFeature()); // remove this line
generator.disable(Feature.AUTO_CLOSE_TARGET);
generator.setPrettyPrinter(new DefaultPrettyPrinter());
return generator;
}

generator.enable(JsonWriteFeature.ESCAPE_NON_ASCII.mappedFeature());
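What ESCAPE_NON_ASCII does can be illustrated without Jackson. A minimal sketch (the class and method names here are mine, not a Jackson or Geode API) that renders each non-ASCII character as a backslash-uXXXX escape, matching the style seen in the REST response above:

```java
class NonAsciiEscaper {
    // Replace every char above 0x7F with its backslash-uXXXX escape; ASCII passes through unchanged.
    static String escapeNonAscii(String s) {
        StringBuilder sb = new StringBuilder(s.length());
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            if (c > 0x7F) {
                sb.append(String.format("\\u%04X", (int) c));
            } else {
                sb.append(c);
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Prints the escaped form of the event_name value from the response above
        System.out.println(escapeNonAscii("QQA\u6703\u54E1\u7368\u4EAB+FreePrint"));
    }
}
```

Disabling the feature (removing the enable() line, as suggested above) is what lets the REST endpoint return the raw UTF-8 characters instead of the escapes.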
…import data with --invoke-callbacks=true also force WAN replication for the imported data?

[warn 2021/03/29 07:01:21.058 EDT <Event Processor for GatewaySender_AsyncEventQueue_VehicleAggregationQueue_2> tid=0x50] org.apache.geode.internal.cache.wan.GatewaySenderEventCallbackDispatcher@546cd163: Exception during processing batch 0
org.apache.geode.internal.cache.wan.GatewaySenderException: org.apache.geode.internal.cache.wan.GatewaySenderEventCallbackDispatcher@546cd163: Exception during processing batch 0, caused by java.lang.IllegalStateException: Unknown pdx type=4907082
at org.apache.geode.internal.cache.wan.GatewaySenderEventCallbackDispatcher.dispatchBatch(GatewaySenderEventCallbackDispatcher.java:160)
at org.apache.geode.internal.cache.wan.GatewaySenderEventCallbackDispatcher.dispatchBatch(GatewaySenderEventCallbackDispatcher.java:78)
at org.apache.geode.internal.cache.wan.AbstractGatewaySenderEventProcessor.processQueue(AbstractGatewaySenderEventProcessor.java:639)
at org.apache.geode.internal.cache.wan.AbstractGatewaySenderEventProcessor.run(AbstractGatewaySenderEventProcessor.java:1112)
Caused by: java.lang.IllegalStateException: Unknown pdx type=4907082
at org.apache.geode.internal.InternalDataSerializer.readPdxSerializable(InternalDataSerializer.java:2857)
at org.apache.geode.internal.InternalDataSerializer.basicReadObject(InternalDataSerializer.java:2632)
at org.apache.geode.DataSerializer.readObject(DataSerializer.java:2864)
at org.apache.geode.internal.util.BlobHelper.deserializeBlob(BlobHelper.java:90)
at org.apache.geode.internal.cache.EntryEventImpl.deserialize(EntryEventImpl.java:2039)
at org.apache.geode.internal.cache.EntryEventImpl.deserialize(EntryEventImpl.java:2032)
at org.apache.geode.internal.cache.wan.GatewaySenderEventImpl.getDeserializedValue(GatewaySenderEventImpl.java:566)
at com.vmware.tanzu.data.IoT.vehicles.geode.VehicleAggregationAsyncListener.processEvents(VehicleAggregationAsyncListener.java:56)
at org.apache.geode.internal.cache.wan.GatewaySenderEventCallbackDispatcher.dispatchBatch(GatewaySenderEventCallbackDispatcher.java:150)
... 3 more
Caused by: java.security.cert.CertificateException: No subject alternative DNS name matching localhost found.
[info 2021/03/26 17:20:07.268 UTC locator-d2fe07f4-2445-4c79-a734-861f7e99161f <main> tid=0x1]
Initialization of region _ConfigurationRegion completed
[info 2021/03/26 17:20:07.291 UTC locator-d2fe07f4-2445-4c79-a734-861f7e99161f <main> tid=0x1]
Stopping Locator on 0.0.0.0/0.0.0.0:55221:55221
[info 2021/03/26 17:20:07.306 UTC locator-d2fe07f4-2445-4c79-a734-861f7e99161f <locator request
thread 1> tid=0xe] Exception in processing request from 127.0.0.1
javax.net.ssl.SSLHandshakeException: Received fatal alert: certificate_unknown
at sun.security.ssl.Alerts.getSSLException(Alerts.java:192)
at sun.security.ssl.Alerts.getSSLException(Alerts.java:154)
at sun.security.ssl.SSLSocketImpl.recvAlert(SSLSocketImpl.java:2020)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1127)
at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1367)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1395)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1379)
at org.apache.geode.internal.net.SocketCreator.handshakeIfSocketIsSSL(SocketCreator.java:895)
at org.apache.geode.distributed.internal.tcpserver.TcpServer.lambda$processRequest$0(TcpServer.java:307)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[fatal 2021/03/26 17:20:07.307 UTC locator-d2fe07f4-2445-4c79-a734-861f7e99161f <main> tid=0x1]
Problem forming SSL connection to localhost/127.0.0.1[55221].
javax.net.ssl.SSLHandshakeException: java.security.cert.CertificateException: No subject alternative DNS name matching localhost found.
at sun.security.ssl.Alerts.getSSLException(Alerts.java:192)
at sun.security.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1946)
at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:316)
at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:310)
at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1639)
at sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:223)
at sun.security.ssl.Handshaker.processLoop(Handshaker.java:1037)
at sun.security.ssl.Handshaker.process_record(Handshaker.java:965)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1064)
at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1367)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1395)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1379)
at org.apache.geode.internal.net.SocketCreator.configureClientSSLSocket(SocketCreator.java:1000)
at org.apache.geode.internal.net.SocketCreator.connect(SocketCreator.java:702)
at org.apache.geode.distributed.internal.tcpserver.TcpSocketCreatorImpl.connect(TcpSocketCreatorImpl.java:165)
at org.apache.geode.distributed.internal.tcpserver.TcpClient.getServerVersion(TcpClient.java:268)
at org.apache.geode.distributed.internal.tcpserver.TcpClient.requestToServer(TcpClient.java:164)
at org.apache.geode.distributed.internal.tcpserver.TcpClient.requestToServer(TcpClient.java:147)
at org.apache.geode.distributed.internal.tcpserver.TcpClient.requestToServer(TcpClient.java:126)
at org.apache.geode.distributed.internal.tcpserver.TcpClient.stop(TcpClient.java:82)
at org.apache.geode.distributed.internal.membership.gms.locator.MembershipLocatorImpl.stop(MembershipLocatorImpl.java:179)
at org.apache.geode.distributed.internal.InternalLocator.stop(InternalLocator.java:956)
at org.apache.geode.distributed.internal.InternalLocator.stop(InternalLocator.java:903)
at org.apache.geode.distributed.internal.InternalLocator.startLocator(InternalLocator.java:388)
at org.apache.geode.distributed.LocatorLauncher.start(LocatorLauncher.java:714)
at org.apache.geode.distributed.LocatorLauncher.run(LocatorLauncher.java:621)
at org.apache.geode.distributed.LocatorLauncher.main(LocatorLauncher.java:215)
Caused by: java.security.cert.CertificateException: No subject alternative DNS name matching localhost found.
at sun.security.util.HostnameChecker.matchDNS(HostnameChecker.java:214)
at sun.security.util.HostnameChecker.match(HostnameChecker.java:96)
at sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:462)
at sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:442)
at sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:209)
at sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:132)
at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1621)
... 22 more
…GemFire 9.9.5 (PCC_1.10) to Geode 1.14.0 (PCC_1.14), which utilizes a WAN configuration. One cluster upgrades fine but the other fails with this error. Thanks.

[fatal 2021/04/07 22:49:33.431 UTC cacheserver-a01c1b06-249b-449f-8443-fb7e3c4a5d06 <P2P message reader for 5a651722-3af0-4ed3-9719-6fd521ea0ef3.locator-server.maastrichtblue-services-peer-subnet.service-instance-10b32dbd-3215-437d-8568-6931c7f2372d.bosh(locator-5a651722-3af0-4ed3-9719-6fd521ea0ef3:6:locator)<ec><v31>:56152(version:GEODE 1.10.0) shared unordered sender uid=43 local port=40405 remote port=36772> tid=0x34] Error deserializing message
java.io.IOException: Could not create an instance of org.apache.geode.internal.cache.FunctionStreamingReplyMessage .
at org.apache.geode.internal.serialization.internal.DSFIDSerializerImpl.invokeFromData(DSFIDSerializerImpl.java:330)
at org.apache.geode.internal.serialization.internal.DSFIDSerializerImpl.create(DSFIDSerializerImpl.java:368)
at org.apache.geode.internal.DSFIDFactory.create(DSFIDFactory.java:1031)
at org.apache.geode.internal.InternalDataSerializer.readDSFID(InternalDataSerializer.java:2391)
at org.apache.geode.internal.InternalDataSerializer.readDSFID(InternalDataSerializer.java:2403)
at org.apache.geode.internal.tcp.Connection.readMessage(Connection.java:2979)
at org.apache.geode.internal.tcp.Connection.processInputBuffer(Connection.java:2797)
at org.apache.geode.internal.tcp.Connection.readMessages(Connection.java:1651)
at org.apache.geode.internal.tcp.Connection.run(Connection.java:1482)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.NotSerializableException
at org.apache.geode.internal.cache.FunctionStreamingReplyMessage.fromData(FunctionStreamingReplyMessage.java:97)
at org.apache.geode.internal.serialization.internal.DSFIDSerializerImpl.invokeFromData(DSFIDSerializerImpl.java:317)
... 11 more
Caused by: java.io.IOException: Unknown header byte 0
at org.apache.geode.internal.serialization.DscodeHelper.toDSCODE(DscodeHelper.java:40)
at org.apache.geode.internal.InternalDataSerializer.basicReadObject(InternalDataSerializer.java:2496)
at org.apache.geode.DataSerializer.readObject(DataSerializer.java:2864)
at org.apache.geode.DataSerializer.readHashMap(DataSerializer.java:2261)
at org.apache.geode.management.internal.configuration.messages.ConfigurationResponse.fromData(ConfigurationResponse.java:69)
at org.apache.geode.internal.DSFIDFactory.readConfigurationResponse(DSFIDFactory.java:1091)
at org.apache.geode.internal.DSFIDFactory.create(DSFIDFactory.java:1027)
at org.apache.geode.internal.InternalDataSerializer.basicReadObject(InternalDataSerializer.java:2510)
at org.apache.geode.DataSerializer.readObject(DataSerializer.java:2864)
at org.apache.geode.internal.cache.FunctionStreamingReplyMessage.fromData(FunctionStreamingReplyMessage.java:93)
... 12 more
JVM command line arguments
-XX:NewSize=482m -XX:MaxNewSize=482m -Xms4827m -Xmx4827m
-Dp2p.HANDSHAKE_POOL_SIZE=40 -DBridgeServer.HANDSHAKE_POOL_SIZE=40
-DgemfirePropertyFile=/var/vcap/jobs/gemfire-server/config/gemfire.properties
-DgemfireSecurityPropertyFile=/var/vcap/jobs/gemfire-server/config/gfsecurity.properties
-Dgemfire.locators=2df66910-127e-4011-8638-051f995e6e30.locator-server.vividgamboge-services-subnet.service-instance-07880386-99c7-461f-84d2-bdccc7cc2095.bosh[55221],1d5b9926-f8a7-4070-98c7-a378aa1fc4d6.locator-server.vividgamboge-services-subnet.service-instance-07880386-99c7-461f-84d2-bdccc7cc2095.bosh[55221],6840943e-c791-476e-85c9-b142b5b3892c.locator-server.vividgamboge-services-subnet.service-instance-07880386-99c7-461f-84d2-bdccc7cc2095.bosh[55221]
-Dgemfire.forceDnsUse=true -Djdk.tls.trustNameService=true
-Dgemfire.use-cluster-configuration=true -Dgemfire.http-service-port=7070 -Dgemfire.start-dev-rest-api=false
-Xloggc:/var/vcap/sys/log/gemfire-server/gemfire/server_gc.log
-Dgemfire.OSProcess.ENABLE_OUTPUT_REDIRECTION=true
-XX:CMSInitiatingOccupancyFraction=60 -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps
-XX:+PrintGCApplicationStoppedTime -XX:+PrintGCApplicationConcurrentTime
-XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=20 -XX:GCLogFileSize=1M
-XX:+UnlockDiagnosticVMOptions -XX:ParGCCardsPerStrideChunk=32768
-XX:OnOutOfMemoryError='kill -9 %p'
-XX:+UseNUMA -XX:+UseConcMarkSweepGC -XX:+UseCMSInitiatingOccupancyOnly -XX:+CMSClassUnloadingEnabled
-XX:+DisableExplicitGC
-Dgemfire.security-manager= -Dgemfire.security-enabled-components=server,http,jmx,gateway
-Dgemfire.launcher.registerSignalHandlers=true -Dcloudcache-metrics-endpoint-port=7575
-Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
-Dgpfdist-hostname=2df66910-127e-4011-8638-051f995e6e30.locator-server.vividgamboge-services-subnet.service-instance-07880386-99c7-461f-84d2-bdccc7cc2095.bosh
-Dp2p.backlog=1024 -DDistributionManager.MAX_FE_THREADS=2048

GemFire properties defined using the API
name : cacheserver-2df66910-127e-4011-8638-051f995e6e30

GemFire properties defined with the property file
ack-severe-alert-threshold : 0
ack-wait-threshold : 15
archive-disk-space-limit : 150
archive-file-size-limit : 10
bind-address : 2df66910-127e-4011-8638-051f995e6e30.locator-server.vividgamboge-services-subnet.service-instance-07880386-99c7-461f-84d2-bdccc7cc2095.bosh
conserve-sockets : false
delta-propagation : true
deploy-working-dir : /var/vcap/store/gemfire-server
disable-auto-reconnect : false
disable-tcp : false
distributed-system-id : 42
enable-network-partition-detection : true
enable-time-statistics : true
groups : cacheserver-2df66910-127e-4011-8638-051f995e6e30
http-service-bind-address : 2df66910-127e-4011-8638-051f995e6e30.locator-server.vividgamboge-services-subnet.service-instance-07880386-99c7-461f-84d2-bdccc7cc2095.bosh
jmx-manager : true
jmx-manager-port : 1088
jmx-manager-start : true
jmx-manager-update-rate : 2000
locator-wait-time : 120
lock-memory : false
log-disk-space-limit : 100
log-file : /var/vcap/sys/log/gemfire-server/gemfire/server.log
log-file-size-limit : 10
log-level : config
max-wait-time-reconnect : 60000
mcast-port : 0
member-timeout : 8000
membership-port-range : 56152-65535
memcached-port : 0
memcached-protocol : ASCII
redundancy-zone : us-central1-b
remove-unresponsive-client : true
security-client-auth-init : ********
security-log-level : ********
security-peer-verifymember-timeout : ********
socket-buffer-size : 32768 socket-lease-time : 60000 ssl-enabled-components : web ssl-endpoint-identification-enabled : false ssl-keystore : /var/vcap/jobs/gemfire-server/config/keystore.jks ssl-keystore-password : ******** ssl-require-authentication : false ssl-truststore : /var/vcap/jobs/gemfire-server/config/truststore.jks ssl-truststore-password : ******** ssl-web-require-authentication : false statistic-archive-file : /var/vcap/sys/log/gemfire-server/gemfire/statistics.gfs statistic-sample-rate : 1000 statistic-sampling-enabled : true tcp-port : 40405 Cache attributes is-server : true Cache-server attributes max-connections : 5000 tcp-no-delay : true
select * from region where name='foo'
unknown file: error: C++ exception with description "boost::filesystem::remove: The process cannot access the file because it is being used by another process: "HARegionCacheListenerKeyValueTest\locator\0\ConfigDiskDir_HARegionCacheListenerKeyValueTest_locator_0\BACKUPcluster_config.if"" thrown in SetUpTestSuite().
Does anyone know what BACKUPcluster_config.if is, and has any work been done in that area of the locator recently? There's some kind of strange sharing violation occurring here: the locator process has definitely exited, and at least some of the standard command-line tools (I used rmdir /s /q on the locator dir) are able to delete the file, but boost::filesystem, at least, gets a sharing error and throws an exception. This is Geode 1.15.0-build.27.

[error 2021/06/16 12:37:05.329 PDT server3 <main> tid=0x1] java.lang.IllegalStateException: For partition region /data,total-num-buckets 1130 should not be changed. Previous configured number is 113.
Exception in thread "main" java.lang.IllegalStateException: For partition region /data,total-num-buckets 1130 should not be changed. Previous configured number is 113.
at org.apache.geode.internal.cache.PartitionedRegion.createAndValidatePersistentConfig(PartitionedRegion.java:955)
at org.apache.geode.internal.cache.PartitionedRegion.initPRInternals(PartitionedRegion.java:1013)
at org.apache.geode.internal.cache.PartitionedRegion.initialize(PartitionedRegion.java:1193)
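On the total-num-buckets error: the bucket count of a persistent partitioned region is recorded with the disk store, so every member (and every restart) must declare the same total-num-buckets the region was originally created with; changing it after the fact generally means exporting the data, recreating the region, and re-importing. A gfsh sketch of pinning the value explicitly at creation time (region name taken from the error above; everything else illustrative):

```
gfsh>create region --name=data --type=PARTITION_PERSISTENT --total-num-buckets=113
```

Here the member failing to start was configured with 1130 buckets while the persisted configuration says 113, so the fix is to make the declared value match the persisted one, not the other way around.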
v1.13.2 would suddenly start throwing the following errors:
[warn 2021/06/16 20:41:06.882 UTC cacheserver-931ec148-4e8d-408f-bedd-6a831069e47c <Handshaker 931ec148-4e8d-408f-bedd-6a831069e47c.locator-server.roastcoffee-services-subnet.service-instance-356c84ec-4721-40c2-bab3-17e07766e1fd.bosh/10.0.8.5:40404 Thread 1> tid=0x58] Error processing client connection
java.lang.IllegalArgumentException: unknown communications mode: 0
at org.apache.geode.internal.cache.tier.CommunicationMode.fromModeNumber(CommunicationMode.java:164)
at org.apache.geode.internal.cache.tier.sockets.AcceptorImpl.getCommunicationModeForNonSelector(AcceptorImpl.java:1563)
at org.apache.geode.internal.cache.tier.sockets.AcceptorImpl.handleNewClientConnection(AcceptorImpl.java:1430)
at org.apache.geode.internal.cache.tier.sockets.AcceptorImpl.lambda$handOffNewClientConnection$4(AcceptorImpl.java:1341)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[warn 2021/06/16 20:41:06.883 UTC cacheserver-931ec148-4e8d-408f-bedd-6a831069e47c <Handshaker 931ec148-4e8d-408f-bedd-6a831069e47c.locator-server.roastcoffee-services-subnet.service-instance-356c84ec-4721-40c2-bab3-17e07766e1fd.bosh/10.0.8.5:40404 Thread 1> tid=0x58] Cache server: failed accepting client connection java.io.EOFException
java.io.EOFException
at org.apache.geode.internal.cache.tier.sockets.AcceptorImpl.handleNewClientConnection(AcceptorImpl.java:1438)
at org.apache.geode.internal.cache.tier.sockets.AcceptorImpl.lambda$handOffNewClientConnection$4(AcceptorImpl.java:1341)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
parse error: Invalid numeric literal at line 2, column 0
INFO: Failed to get partitioned regions from API Server running on locator. continuing to next server.
looking for server at: 'a020b8e3-891a-49f0-954c-5af281f871c6.locator.weeblyblue-services-peer-subnet.service-instance-33d7efe5-382f…
Visualize and Analyze Apache Geode Real-time and Historical Metrics -> https://tanzu.vmware.com/content/slides/visualize-and-analyze-apache-geode-real-time-and-historical-metrics
Four Real World Use Cases For An In-Memory Data Grid -> https://tanzu.vmware.com/content/blog/four-real-world-use-cases-for-an-in-memory-data-grid
recovery-delay seconds./services/dataTx/dataTxgeodeperftest/components/runners/PutStringInRegionRunner.class]: Unsatisfied dependency expressed through constructor parameter 0; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'region' defined in class path resource [com/vmware/pivotal/labs/services/dataTx/dataTxgeodeperftest/AppGeodeConfig.class]: Bean instantiation via factory method failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.apache.geode.cache.Region]: Factory method 'region' threw exception; nested exception is org.apache.geode.GemFireConfigException: Error configuring GemFire ssl at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:800) ~[spring-beans-5.3.9.jar:5.3.9] at org.springframework.beans.factory.support.ConstructorResolver.autowireConstructor(ConstructorResolver.java:229) ~[spring-beans-5.3.9.jar:5.3.9] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.autowireConstructor(AbstractAutowireCapableBeanFactory.java:1354) ~[spring-beans-5.3.9.jar:5.3.9] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1204) ~[spring-beans-5.3.9.jar:5.3.9] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:564) ~[spring-beans-5.3.9.jar:5.3.9] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:524) ~[spring-beans-5.3.9.jar:5.3.9] at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:335) ~[spring-beans-5.3.9.jar:5.3.9] at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:234) 
~[spring-beans-5.3.9.jar:5.3.9] at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:333) ~[spring-beans-5.3.9.jar:5.3.9] at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:208) ~[spring-beans-5.3.9.jar:5.3.9] at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:944) ~[spring-beans-5.3.9.jar:5.3.9] at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:918) ~[spring-context-5.3.9.jar:5.3.9] at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:583) ~[spring-context-5.3.9.jar:5.3.9] at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:754) ~[spring-boot-2.5.4.jar:2.5.4] at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:434) ~[spring-boot-2.5.4.jar:2.5.4] at org.springframework.boot.SpringApplication.run(SpringApplication.java:338) ~[spring-boot-2.5.4.jar:2.5.4] at org.springframework.boot.SpringApplication.run(SpringApplication.java:1343) ~[spring-boot-2.5.4.jar:2.5.4] at org.springframework.boot.SpringApplication.run(SpringApplication.java:1332) ~[spring-boot-2.5.4.jar:2.5.4] at com.vmware.pivotal.labs.services.dataTx.dataTxgeodeperftest.DataTxGeodePerfApplicationKt.main(DataTxGeodePerfApplication.kt:13) ~[main/:na] Caused by: org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'putStringInRegionRunner' defined in file [/Users/Projects/Pivotal/dataTx/IMDG/geode/extensions/dataTx-geode-extensions-core/applications/geode-perf-test/build/classes/kotlin/main/com/vmware/pivotal/labs/services/dataTx/dataTxgeodeperftest/components/runners/PutStringInRegionRunner.class]: Unsatisfied dependency expressed through constructor parameter 0; nested exception is 
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'region' defined in class path resource [com/vmware/pivotal/labs/services/dataTx/dataTxgeodeperftest/AppGeodeConfig.class]: Bean instantiation via factory method failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.apache.geode.cache.Region]: Factory method 'region' threw exception; nested exception is org.apache.geode.GemFireConfigException: Error configuring GemFire ssl at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:800) ~[spring-beans-5.3.9.jar:5.3.9] at org.springframework.beans.factory.support.ConstructorResolver.autowireConstructor(ConstructorResolver.java:229) ~[spring-beans-5.3.9.jar:5.3.9] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.autowireConstructor(AbstractAutowireCapableBeanFactory.java:1354) ~[spring-beans-5.3.9.jar:5.3.9] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1204) ~[spring-beans-5.3.9.jar:5.3.9] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:564) ~[spring-beans-5.3.9.jar:5.3.9] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:524) ~[spring-beans-5.3.9.jar:5.3.9] at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:335) ~[spring-beans-5.3.9.jar:5.3.9] at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:234) ~[spring-beans-5.3.9.jar:5.3.9] at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:333) ~[spring-beans-5.3.9.jar:5.3.9] at 
org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:208) ~[spring-beans-5.3.9.jar:5.3.9] at org.springframework.beans.factory.config.DependencyDescriptor.resolveCandidate(DependencyDescriptor.java:276) ~[spring-beans-5.3.9.jar:5.3.9] at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1380) ~[spring-beans-5.3.9.jar:5.3.9] at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1300) ~[spring-beans-5.3.9.jar:5.3.9] at org.springframework.beans.factory.support.ConstructorResolver.resolveAutowiredArgument(ConstructorResolver.java:887) ~[spring-beans-5.3.9.jar:5.3.9] at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:791) ~[spring-beans-5.3.9.jar:5.3.9] ... 18 common frames omitted Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'region' defined in class path resource [ ....... ... 
47 common frames omitted Caused by: java.security.UnrecoverableKeyException: Password must not be null at java.base/sun.security.provider.JavaKeyStore.engineGetKey(JavaKeyStore.java:135) ~[na:na] at java.base/sun.security.util.KeyStoreDelegator.engineGetKey(KeyStoreDelegator.java:90) ~[na:na] at java.base/java.security.KeyStore.getKey(KeyStore.java:1057) ~[na:na] at java.base/sun.security.ssl.SunX509KeyManagerImpl.<init>(SunX509KeyManagerImpl.java:145) ~[na:na] at java.base/sun.security.ssl.KeyManagerFactoryImpl$SunX509.engineInit(KeyManagerFactoryImpl.java:70) ~[na:na] at java.base/javax.net.ssl.KeyManagerFactory.init(KeyManagerFactory.java:271) ~[na:na] at org.apache.geode.internal.net.SocketCreator.getKeyManagers(SocketCreator.java:407) ~[geode-core-1.13.1.jar:na] at org.apache.geode.internal.net.SocketCreator.createAndConfigureSSLContext(SocketCreator.java:277) ~[geode-core-1.13.1.jar:na] at org.apache.geode.internal.net.SocketCreator.initialize(SocketCreator.java:231) ~[geode-core-1.13.1.jar:na] ... 74 common frames omitted
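The root cause above ("UnrecoverableKeyException: Password must not be null") typically means Geode was given an ssl-keystore but no ssl-keystore-password, so the JDK cannot open the key entry. A sketch of the security properties that would need to be present (paths and placeholder passwords are illustrative, not from this environment):

```
ssl-enabled-components=all
ssl-keystore=/path/to/keystore.jks
ssl-keystore-password=<keystore-password>
ssl-truststore=/path/to/truststore.jks
ssl-truststore-password=<truststore-password>
```

When Spring (Boot) Data for Apache Geode drives the configuration, the same values can come from the corresponding spring.data.gemfire.security.ssl.* properties; either way, the keystore password must actually reach the SocketCreator or the KeyManagerFactory init fails exactly as shown.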
LOL Just kidding.

gfsh>remove --region=/region1 --key=('id': '133abg134') --key-class=data.ProfileKey

package com.vanguard.tip.epm.entity.reference.investmentlist;
import com.vanguard.tip.epm.id.BaseHashKey;
import com.vanguard.tip.epm.id.PortfolioRelated;
import com.vanguard.tip.epm.model.Date;
remove --region=/InvestmentList --key=("portfolioId":"KT32","holdTypeId":"MAR","holdType":"COUNTRY_OF_RISK","side":"BUY","controlType":"RESTRICT","effectiveFromDate":"2019-07-19","effectiveToDate":"2999-12-31") --key-class=com.vanguard.tip.epm.entity.reference.investmentlist.InvestmentListKey
Trying to run this command to remove this entry, I get:
Result : false
Message : Key is not present in the region
Key Class : com.vanguard.tip.epm.entity.reference.investmentlist.InvestmentListKey
Key : {"portfolioId":"KT32","holdTypeId":"MAR","holdType":"COUNTRY_OF_RISK","side":"BUY","controlType":"RESTRICT","effectiveFromDate":"2019-07-19","effectiveToDate":"2999-12-31"}
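For remove with --key-class, gfsh builds a new instance of the key class from the JSON and looks the entry up by equality, so a "Key is not present in the region" result for a key you can see often points at an equals()/hashCode() implementation that doesn't cover every field populated from the JSON, or at a field (a date type, say) whose deserialized value differs from what was stored. A self-contained sketch of the contract, using a hypothetical two-field key rather than the real InvestmentListKey:

```java
import java.util.Objects;

// Hypothetical stand-in for a region key class. gfsh deserializes the
// --key JSON into a fresh instance and matches it against stored keys
// via equals(), so equals()/hashCode() must cover every key field.
public class KeyEqualityDemo {

    static final class Key {
        final String portfolioId;
        final String holdTypeId;

        Key(String portfolioId, String holdTypeId) {
            this.portfolioId = portfolioId;
            this.holdTypeId = holdTypeId;
        }

        @Override
        public boolean equals(Object o) {
            if (this == o) return true;
            if (!(o instanceof Key)) return false;
            Key other = (Key) o;
            return Objects.equals(portfolioId, other.portfolioId)
                && Objects.equals(holdTypeId, other.holdTypeId);
        }

        @Override
        public int hashCode() {
            return Objects.hash(portfolioId, holdTypeId);
        }
    }

    public static void main(String[] args) {
        Key stored = new Key("KT32", "MAR");     // what the app put()
        Key fromJson = new Key("KT32", "MAR");   // what gfsh rebuilds
        // Two distinct instances must still compare equal, or the
        // region lookup (and hence gfsh remove) will miss the entry.
        System.out.println(stored.equals(fromJson)
            && stored.hashCode() == fromJson.hashCode());
    }
}
```

If the real key class relies on default identity equals(), every gfsh-built key is a brand-new object and the remove will always report the key as absent, which matches the symptom above.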
back from gfsh.
[debug 2021/09/29 18:50:09.769 BST <P2P message reader for 169.117.93.151(rd1-server:7485)<v2>:41001 unshared ordered uid=17 dom #1 port=48234> tid=0x58] putAll processing (GemFire:type=Member,member=rd1-server,org.apache.geode.internal.cache.FutureCachedDeserializable@5722a66a,null) with null sender=169.117.93.151(rd1-server:7485)<v2>:41001
[debug 2021/09/29 18:50:09.769 BST <P2P message reader for 169.117.93.151(rd1-server:7485)<v2>:41001 unshared ordered uid=17 dom #1 port=48234> tid=0x58] Processing DistributedPutAllOperation$PutAllMessage(region path='/_monitoringRegion_169.117.93.151<v2>41001'; sender=169.117.93.151(rd1-server:7485)<v2>:41001; callbackArg=null; processorId=0; op=PUTALL_CREATE; applied=true; directAck=false; posdup=false; hasDelta=false; hasOldValue=false; lastModified=1632937809762; eventId=EventID[169.117.93.151(rd1-server)<v2>:41001;threadID=1;sequenceID=51594]; entries=2; entry values=[(GemFire:service=CacheServer,port=16771,type=Member,member=rd1-server,VMCachedDeserializable@1146430504,null), (GemFire:type=Member,member=rd1-server,VMCachedDeserializable@473620234,null)])
[debug 2021/09/29 18:50:10.009 BST <P2P message reader for 169.117.93.151(rd1-locator:2054:locator)<ec><v1>:41000 unshared ordered uid=15 dom #1 port=48238> tid=0x5a] Received message 'DistributedPutAllOperation$PutAllMessage(region path='/_monitoringRegion_169.117.93.151<v1>41000'; sender=169.117.93.151(rd1-locator:2054:locator)<ec><v1>:41000; callbackArg=null; processorId=0; op=PUTALL_CREATE; applied=false; directAck=false; posdup=false; hasDelta=false; hasOldValue=false; lastModified=1632937810009; eventId=EventID[169.117.93.151(rd1-locator:locator)<ec><v1>:41000;threadID=1;sequenceID=5858]; entries=1; entry values=[(GemFire:type=Member,member=rd1-locator,org.apache.geode.internal.cache.FutureCachedDeserializable@6ff0ee1,null)])' from <169.117.93.151(rd1-locator:2054:locator)<ec><v1>:41000>
[debug 2021/09/29 18:50:10.009 BST <P2P message reader for
169.117.93.151(rd1-locator:2054:locator)<ec><v1>:41000 unshared ordered uid=15 dom #1 port=48238> tid=0x5a] recording bulkOp start for ThreadId[169.117.93.151(rd1-locator:locator)<ec><v1>:41000; thread 1] [debug 2021/09/29 18:50:10.009 BST <P2P message reader for 169.117.93.151(rd1-locator:2054:locator)<ec><v1>:41000 unshared ordered uid=15 dom #1 port=48238> tid=0x5a] putAll processing (GemFire:type=Member,member=rd1-locator,org.apache.geode.internal.cache.FutureCachedDeserializable@6ff0ee1,null) with null sender=169.117.93.151(rd1-locator:2054:locator)<ec><v1>:41000 [debug 2021/09/29 18:50:10.009 BST <P2P message reader for 169.117.93.151(rd1-locator:2054:locator)<ec><v1>:41000 unshared ordered uid=15 dom #1 port=48238> tid=0x5a] Processing DistributedPutAllOperation$PutAllMessage(region path='/_monitoringRegion_169.117.93.151<v1>41000'; sender=169.117.93.151(rd1-locator:2054:locator)<ec><v1>:41000; callbackArg=null; processorId=0; op=PUTALL_CREATE; applied=false; directAck=false; posdup=false; hasDelta=false; hasOldValue=false; lastModified=1632937810009; eventId=EventID[169.117.93.151(rd1-locator:locator)<ec><v1>:41000;threadID=1;sequenceID=5858]; entries=1; entry values=[(GemFire:type=Member,member=rd1-locator,VMCachedDeserializable@1798102003,null)])
[debug 2021/09/29 18:50:50.692 BST <poolTimer-cdt-rdc-clientpool-4> tid=0x2e] Received query response from locator LocatorAddress [socketInetAddress=iaase00000209.svr.emea.jpmchase.net/169.117.93.151:16770, hostname=iaase00000209.svr.emea.jpmchase.net, isIpString=false]: LocatorListResponse{locators=[iaase00001092.svr.emea.jpmchase.net:16770, iaase00000209.svr.emea.jpmchase.net:16770],isBalanced=false}
[debug 2021/09/29 18:50:59.710 BST <main> tid=0x1] Destroying failed connection to iaase00000209.svr.emea.jpmchase.net:16771
[warn 2021/09/29 18:50:59.710 BST <main> tid=0x1] Could not connect to: iaase00000209.svr.emea.jpmchase.net:16771
java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:171)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at java.net.SocketInputStream.read(SocketInputStream.java:224)
at java.io.DataInputStream.readByte(DataInputStream.java:265)
at org.apache.geode.cache.client.internal.ClientSideHandshakeImpl.handshakeWithServer(ClientSideHandshakeImpl.java:199)
at org.apache.geode.cache.client.internal.ConnectionImpl.connect(ConnectionImpl.java:118)
at org.apache.geode.cache.client.internal.ConnectionConnector.connectClientToServer(ConnectionConnector.java:75)
at org.apache.geode.cache.client.internal.ConnectionFactoryImpl.createClientToServerConnection(ConnectionFactoryImpl.java:111)
at org.apache.geode.cache.client.internal.QueueManagerImpl.initializeConnections(QueueManagerImpl.java:452)
at org.apache.geode.cache.client.internal.QueueManagerImpl.start(QueueManagerImpl.java:290)
at org.apache.geode.cache.client.internal.PoolImpl.start(PoolImpl.java:337)
at org.apache.geode.cache.client.internal.PoolImpl.finishCreate(PoolImpl.java:176)
at org.apache.geode.cache.client.internal.PoolImpl.create(PoolImpl.java:162)
at org.apache.geode.internal.cache.PoolFactoryImpl.create(PoolFactoryImpl.java:372)
at org.springframework.data.gemfire.client.PoolFactoryBean.create(PoolFactoryBean.java:379)
at org.springframework.data.gemfire.client.PoolFactoryBean.lambda$getObject$7(PoolFactoryBean.java:251)
at java.util.Optional.orElseGet(Optional.java:267)
at org.springframework.data.gemfire.client.PoolFactoryBean.getObject(PoolFactoryBean.java:244)
at org.springframework.data.gemfire.client.PoolFactoryBean.getObject(PoolFactoryBean.java:75)
at org.springframework.beans.factory.support.FactoryBeanRegistrySupport.doGetObjectFromFactoryBean(FactoryBeanRegistrySupport.java:171)
at org.springframework.beans.factory.support.FactoryBeanRegistrySupport.getObjectFromFactoryBean(FactoryBeanRegistrySupport.java:101)
at org.springframework.beans.factory.support.AbstractBeanFactory.getObjectForBeanInstance(AbstractBeanFactory.java:1821)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.getObjectForBeanInstance(AbstractAutowireCapableBeanFactory.java:1266)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:333)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:202)
at org.springframework.beans.factory.config.DependencyDescriptor.resolveCandidate(DependencyDescriptor.java:276)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1306)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1226)
at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor$AutowiredFieldElement.inject(AutowiredAnnotationBeanPostProcessor.java:640)
at org.springframework.beans.factory.annotation.InjectionMetadata.inject(InjectionMetadata.java:130)
at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor.postProcessProperties(AutowiredAnnotationBeanPostProcessor.java:399)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1422)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:594)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:517)
at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:323)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:226)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:321)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:202)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:895)
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:878)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:550)
at com.jpm.referencedata.poc.client.AccountServiceIT.main(AccountServiceIT.java:21)
[debug 2021/09/29 18:50:59.711 BST <main> tid=0x1] SubscriptionManager - Intial primary creation failed. Trying to create a new primary

queue-size: the GFSH reference says the default is Zero, but the XML reference shows no default value. I wouldn't expect the default to be Zero, as this leads to pauses when the OS flushes (syncs to disk) the underlying RandomAccessFile . . .

off-heap=true)? Need to confirm this behavior to understand how/why Citi is running into memory problems when attempting to import data while recovering their system. They have 100+ GB off-heap configured, and ALL Regions use this. Only 24 GB allocated for the Java Heap, which is enough for typical operations, but perhaps not for system recovery ops.

ServerConnection on port 54972 Thread 1: OffHeapStoredObjectWithHeapForm.<init> heapForm=[B@6c910a3c; heapFormLength=10279
ServerConnection on port 54972 Thread 1: GatewaySenderEventImpl.initializeValue region=/data; key=0; initialStoredObject=org.apache.geode.internal.offheap.OffHeapStoredObjectWithHeapForm@25497001:<dataSize=10279 refCount=4 addr=125497000>
ServerConnection on port 54972 Thread 1: OffHeapStoredObject.<init> memoryAddress=4920537088
ServerConnection on port 54972 Thread 1: GatewaySenderEventImpl.initializeValue region=/data; key=0; createdStoredObject=org.apache.geode.internal.offheap.OffHeapStoredObject@25497001:<dataSize=10279 refCount=4 addr=125497000>
ServerConnection on port 54972 Thread 1: GatewaySenderEventImpl.initializeValue region=/data; key=0; value=org.apache.geode.internal.offheap.OffHeapStoredObject@25497001:<dataSize=10279 refCount=4 addr=125497000>
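On the queue-size question above: for a disk store it caps how many operations may back up in the asynchronous write queue before writers block while a flush catches up, with 0 meaning no limit. A gfsh sketch with an explicit, bounded queue (name, directory, and numbers are illustrative only):

```
gfsh>create disk-store --name=dataStore --dir=/var/data/disk-store --queue-size=1000 --time-interval=1000
```

With queue-size=0 the queue grows unbounded between flushes, so whether the OS-level sync pauses described above bite depends mostly on time-interval and the write rate rather than on the queue cap.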
geode_function_executions jvm_memory jvm_threads jvm_gc system_cpu
The Cache Server process terminated unexpectedly with exit status 1. Please refer to the log file in /Users/devtools/repositories/IMDG/geode/apache-geode-1.13.1/bin/server1 for full details.
Exception in thread "main" org.apache.geode.GemFireConfigException: Unable to join the distributed system. Operation either timed out, was stopped or Locator does not exist.
at org.apache.geode.distributed.internal.DistributionImpl.start(DistributionImpl.java:184)
at org.apache.geode.distributed.internal.DistributionImpl.createDistribution(DistributionImpl.java:222)
at org.apache.geode.distributed.internal.ClusterDistributionManager.<init>(ClusterDistributionManager.java:464)
at org.apache.geode.distributed.internal.ClusterDistributionManager.<init>(ClusterDistributionManager.java:497)
at org.apache.geode.distributed.internal.ClusterDistributionManager.create(ClusterDistributionManager.java:326)
at org.apache.geode.distributed.internal.InternalDistributedSystem.initialize(InternalDistributedSystem.java:779)
at org.apache.geode.distributed.internal.InternalDistributedSystem.access$200(InternalDistributedSystem.java:135)
at org.apache.geode.distributed.internal.InternalDistributedSystem$Builder.build(InternalDistributedSystem.java:3033)
at org.apache.geode.distributed.internal.InternalDistributedSystem.connectInternal(InternalDistributedSystem.java:290)
at org.apache.geode.distributed.internal.InternalDistributedSystem.connectInternal(InternalDistributedSystem.java:216)
at org.apache.geode.internal.cache.InternalCacheBuilder.createInternalDistributedSystem(InternalCacheBuilder.java:346)
at java.base/java.util.Optional.orElseGet(Optional.java:369)
at org.apache.geode.internal.cache.InternalCacheBuilder.create(InternalCacheBuilder.java:157)
at org.apache.geode.cache.CacheFactory.create(CacheFactory.java:142)
at org.apache.geode.distributed.internal.DefaultServerLauncherCacheProvider.createCache(DefaultServerLauncherCacheProvider.java:52)
at org.apache.geode.distributed.ServerLauncher.createCache(ServerLauncher.java:892)
at org.apache.geode.distributed.ServerLauncher.start(ServerLauncher.java:807)
at org.apache.geode.distributed.ServerLauncher.run(ServerLauncher.java:737)
at org.apache.geode.distributed.ServerLauncher.main(ServerLauncher.java:256)

ClientCache application with a LOCAL-only Region, doing a simple put(..) then get(..), working on GraalVM for Java 17 (OpenJDK 17). Look for the source code of this app in a comment (I have not posted my project to GitHub yet, but will). FYR, when you run the simple, pure Geode app with a standard JVM (e.g. HotSpot VM), this is the timing...
$ time java -cp ... io.examples.app.geode.ApacheGeodeApplication > /dev/null
... real 0m1.356s user 0m1.619s sys 0m0.234s
$ time geodeapp > /dev/null... real 0m0.389s user 0m0.024s sys 0m0.021s
package io.examples.app.geode;
import java.io.Serializable;
import java.util.Properties;
import org.apache.geode.cache.GemFireCache;
import org.apache.geode.cache.Region;
import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientCacheFactory;
import org.apache.geode.cache.client.ClientRegionShortcut;
import org.apache.geode.distributed.ConfigurationProperties;
import io.examples.app.geode.util.Assertions;
import lombok.AccessLevel;
import lombok.EqualsAndHashCode;
import lombok.Getter;
import lombok.RequiredArgsConstructor;
import lombok.Setter;
/**
* The {@link ApacheGeodeApplication} class is a Java program that bootstraps (configures & initializes) Apache Geode,
* stores then retrieves data in a Geode cache, and prints the result.
*
* @author John Blum
* @see java.lang.Runnable
* @see java.io.Serializable
* @see java.util.Properties
* @see org.apache.geode.cache.GemFireCache
* @see org.apache.geode.cache.client.ClientCache
* @see org.apache.geode.cache.client.ClientCacheFactory
* @see org.apache.geode.distributed.ConfigurationProperties
* @since 1.0.0
*/
@Getter
@SuppressWarnings("unused")
public class ApacheGeodeApplication implements Runnable {
protected static final String LOG_LEVEL = "INFO";
protected static final String USERS_REGION_NAME = "Users";
public static void main(String[] args) {
new ApacheGeodeApplication().run();
}
private static Properties newGemFireProperties(String name) {
Properties gemfireProperties = new Properties();
gemfireProperties.setProperty(ConfigurationProperties.NAME, name);
gemfireProperties.setProperty(ConfigurationProperties.LOG_LEVEL, LOG_LEVEL);
return gemfireProperties;
}
private static ClientCache newClientCache(Properties gemfireProperties) {
ClientCache clientCache = new ClientCacheFactory(gemfireProperties).create();
clientCache.setCopyOnRead(true);
return clientCache;
}
private static ClientCache newUsersRegion(ClientCache clientCache) {
clientCache.<Long, User>createClientRegionFactory(ClientRegionShortcut.LOCAL)
.create(USERS_REGION_NAME);
return clientCache;
}
private final GemFireCache cache;
public ApacheGeodeApplication() {
this(newUsersRegion(newClientCache(
newGemFireProperties(ApacheGeodeApplication.class.getSimpleName()))));
}
public ApacheGeodeApplication(GemFireCache cache) {
Assertions.assertNotNull(cache, "GemFireCache is required");
this.cache = cache;
}
@Override
public void run() {
run(getCache());
}
private void log(String message, Object... args) {
System.out.printf(message, args);
System.out.flush();
}
protected void run(GemFireCache cache) {
Region<Long, User> users = cache.getRegion(USERS_REGION_NAME);
User jonDoe = User.of("jonDoe").identifiedBy(1L);
log("Saving User [%s]...%n", jonDoe);
users.put(jonDoe.getId(), jonDoe);
log("Loading User with ID [%d]...%n", jonDoe.getId());
User loadedJonDoe = users.get(jonDoe.getId());
Assertions.assertNotNull(loadedJonDoe, "Loaded jonDoe was null");
Assertions.assertNotSame(loadedJonDoe, jonDoe);
Assertions.assertEquals(loadedJonDoe, jonDoe);
System.out.println("SUCCESS!!");
}
}
@Getter
@EqualsAndHashCode
@RequiredArgsConstructor(staticName = "of")
class User implements Serializable {

    @Setter(AccessLevel.PRIVATE)
    private Long id;

    @lombok.NonNull
    private final String name;

    public User identifiedBy(Long id) {
        setId(id);
        return this;
    }

    @Override
    public String toString() {
        return getName();
    }
}

public class SomeObject {
    private String firstProp;
    private String secondProp;
    private LocalDate startDate;
    private List<CustomFirstObject> customFirstObjects;
}

public class CustomFirstObject {
    private List<CustomSecondObject> customSecondObjects;
}

public class CustomSecondObject {
    private Integer id;
    private Double amount;
}

Writing your own PdxSerializer gets involved, especially if you have more than 1 complex domain model object/type hierarchy/composition. GemFire/Geode only allows you to register 1 PdxSerializer for all your PDX de/serialization needs, which means you need to get crafty (e.g. using the Composite design pattern) when handling multiple object type hierarchies/compositions. Neither of these is necessary with GemFire/Geode's ReflectionBasedAutoSerializer or SDG's MappingPdxSerializer.

[info 2021/12/11 16:46:18.820 CST tazrslloc001 <main> tid=0x1] Adding webapp /pulse
2021-12-11 16:46:19,964 main ERROR Unrecognized format specifier [hexTid]
2021-12-11 16:46:19,965 main ERROR Unrecognized conversion specifier [hexTid] starting at position 90 in conversion pattern.
2021-12-11 16:46:19,971 main ERROR Unable to invoke factory method in class org.apache.geode.logging.log4j.internal.impl.GeodeConsoleAppender for element GeodeConsole: java.lang.IllegalStateException: No factory method found for class org.apache.geode.logging.log4j.internal.impl.GeodeConsoleAppender java.lang.IllegalStateException: No factory method found for class
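The Composite pattern mentioned above can be sketched as follows. Note this is a simplified, hypothetical `TypedSerializer` interface with a String payload, purely to show the dispatch; Geode's real `org.apache.geode.pdx.PdxSerializer` works against `PdxWriter`/`PdxReader`, with `toData` returning `false` for types a serializer does not handle.

```java
import java.util.List;
import java.util.Optional;

// Simplified stand-in for PdxSerializer: a delegate returns an empty
// Optional when it does not handle the given type.
interface TypedSerializer {
    Optional<String> toData(Object obj);
}

class UserSerializer implements TypedSerializer {
    public Optional<String> toData(Object obj) {
        return obj instanceof String ? Optional.of("user:" + obj) : Optional.empty();
    }
}

class NumberSerializer implements TypedSerializer {
    public Optional<String> toData(Object obj) {
        return obj instanceof Number ? Optional.of("number:" + obj) : Optional.empty();
    }
}

// The composite: iterate the delegates; the first one that claims the type wins.
// This is the single serializer you would register with the cache.
public class CompositeSerializer implements TypedSerializer {

    private final List<TypedSerializer> delegates;

    public CompositeSerializer(List<TypedSerializer> delegates) {
        this.delegates = delegates;
    }

    public Optional<String> toData(Object obj) {
        for (TypedSerializer delegate : delegates) {
            Optional<String> result = delegate.toData(obj);
            if (result.isPresent()) {
                return result;
            }
        }
        return Optional.empty(); // no delegate handles this type
    }

    public static void main(String[] args) {
        CompositeSerializer composite =
            new CompositeSerializer(List.of(new UserSerializer(), new NumberSerializer()));
        System.out.println(composite.toData("jonDoe").orElse("unhandled")); // user:jonDoe
        System.out.println(composite.toData(42).orElse("unhandled"));       // number:42
        System.out.println(composite.toData(new Object()).orElse("unhandled")); // unhandled
    }
}
```

The same dispatch loop works against the real PdxSerializer contract, since `toData` returning `false` means "not my type, try the next one".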
This makes it easier to find older conversations and maintain multiple/concurrent discussions on different topics.

public class Example
{
    private ClientCache cache;
    private Region<String, Account> accountRegion;
    private CqQuery accountCqQuery;
    private boolean durable = false;

    private void init() throws CqException, RegionNotFoundException, CqExistsException {
        // Init the cache, region, and CQ.
        // Connect to the locator using the default port 10334.
        this.cache = connectToLocallyRunningGeode();
        // Create a local PROXY region that matches the server region.
        this.accountRegion = cache.<String, Account>createClientRegionFactory(ClientRegionShortcut.PROXY)
            .create("Account");
        this.accountCqQuery = this.startCQ(this.cache, this.accountRegion);
        this.cache.readyForEvents();
    }

    private void close() throws CqException {
        // Close the CQ and the cache.
        this.accountCqQuery.close();
        this.cache.close();
    }

    public static void main(String[] args) throws Exception {
        Example example = new Example();
        example.init();
        while (true) {
            try {
                Thread.sleep(10_000);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;
            }
        }
        example.close();
    }

    private CqQuery startCQ(ClientCache cache, Region region)
            throws CqException, RegionNotFoundException, CqExistsException {
        // Build the CQ attributes with a listener to receive events.
        CqAttributesFactory cqf = new CqAttributesFactory();
        cqf.addCqListener(new VMwareAccountListener());
        CqAttributes cqa = cqf.create();
        String cqName = "accountVMwareTracker";
        String queryStr = "select * from /Account";
        QueryService queryService = region.getRegionService().getQueryService();
        CqQuery cqQuery = queryService.newCq(cqName, queryStr, cqa, durable);
        cqQuery.execute();
        System.out.println("------- CQ is running\n");
        return cqQuery;
    }

    private ClientCache connectToLocallyRunningGeode() {
        PdxSerializer serializer = new ReflectionBasedAutoSerializer(".*");
        return new ClientCacheFactory()
            .addPoolLocator("127.0.0.1", 10334)
            .setPdxSerializer(serializer)
            .set("durable-client-id", "example")
            .set("durable-client-timeout", "9999")
            .setPoolSubscriptionEnabled(true)
            .set("log-level", "WARN")
            .create();
    }
}
I have managed to lose access to AWS, where I suspect I would find an archived version. Try #caching-rtm. With most of the crew out for PTO, it may take some time to find a copy.

The @EnablePdx annotation has a serializerBeanName attribute (here) that allows you to refer to a custom PdxSerializer bean by name. By way of example:

@SpringBootApplication
@ClientCacheApplication
@EnablePdx(serializerBeanName = "myCustomPdxSerializer")
class MySpringBootApplication {

    @Bean("myCustomPdxSerializer")
    PdxSerializer customPdxSerializer() {
        return ...;
    }
}

@TimeToLiveExpiration) in order to declare custom expiration policies on application domain objects, somewhat fluently. In hindsight, I may have changed my mind about that approach now. The Spring portfolio is quite mixed about this approach anyway. I believe in strongly-typed things.

Back on point: if you are defining Regions via SDG annotations (e.g. @EnableEntityDefinedRegions when using SD Repositories, or @EnableCachingDefinedRegions when caching), then this is where a Configurer, like the RegionConfigurer, comes into play.

Since the Region (bean) definition and declaration is implicit (created by the framework), you do not typically have an explicit bean definition declared for the Region (e.g. as you would when declaring a bean of type ClientRegionFactoryBean in JavaConfig). However, that does not mean a Region FactoryBean of some type (e.g. ClientRegionFactoryBean) does not exist when using the Region-defining SDG annotations above; it just isn't explicit.

So, you declare a RegionConfigurer as a bean definition in Spring JavaConfig to get access to the desired Region bean definition (and FactoryBean; e.g. here) and further customize it before initialization, such as adding CacheListeners, CacheLoaders, CacheWriters, etc. This is one of the many purposes of a Configurer.

Also, by declaring a Configurer as a bean definition in JavaConfig, you can take advantage of Spring property placeholders and SpEL expressions using Spring's @Value annotation on the bean method. For example:

@EnableEntityDefinedRegions(basePackageClasses = ...)
class MySpringBootApplication {

    @Bean("MyConfigurer")
    RegionConfigurer addCacheListenerToRegion(
            List<CacheListener> cacheListenerBeans,
            @Value("${property.placeholder}") Integer value,
            @Value("#{ <SpEL expression> }") Instant dateTime) {

        return new RegionConfigurer() {

            @Override
            public void configure(String beanName, ClientRegionFactoryBean bean) {
                bean.setCacheListeners(cacheListenerBeans.toArray(new CacheListener[0]));
                // perhaps use the @Value-injected Integer and date/time values in some way
            }
        };
    }
}

The List of CacheListener beans injected into the addCacheListenerToRegion bean method (in JavaConfig) will be all CacheListeners defined in the Spring context. Of course, you can inject 1, or several, specific listeners based on qualification, etc. Whatever you want.

It is appropriate to name the Region bean (i.e. bean name) after the Region. Then you can filter the Region beans to affect in the Configurer by name, given the beanName is passed to the Configurer for the Region bean along with the FactoryBean used to configure the bean for the Region in Spring.

With Spring, you are only limited by your imagination. It truly is innovative and flexible in its approach (for everything), and open for extension, closed for modification. Anyway, hopefully you get the idea!

region.containsKey(key) && region.get(key).equals(value), don't do the put, right?

Index of gemfire/io/pivotal/gemfire/geode-core/9.10.14
Name                            Last modified      Size
../
geode-core-9.10.14-javadoc.jar  14-Jan-2022 18:05  2.51 MB
geode-core-9.10.14-sources.jar  14-Jan-2022 18:05  7.70 MB
geode-core-9.10.14.jar          14-Jan-2022 18:05  10.69 MB
geode-core-9.10.14.pom          14-Jan-2022 18:05  8.49 KB
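On the check-before-put question above: Region implements java.util.concurrent.ConcurrentMap, so the containsKey/get/equals dance can be collapsed into atomic putIfAbsent/replace calls instead of two racy round trips. A minimal sketch of the idea, using a plain ConcurrentHashMap as a stand-in for the Region (the helper name putIfChanged is mine; note that on a client PROXY region these methods go to the server, so the exact atomicity guarantees depend on the region type):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class ConditionalPut {

    // Skip the put when the value is already current, without the race between
    // containsKey(key) and get(key). Returns true if the map was modified.
    static <K, V> boolean putIfChanged(ConcurrentMap<K, V> region, K key, V value) {
        V previous = region.putIfAbsent(key, value);
        if (previous == null) {
            return true;                               // no entry existed; we put it
        }
        if (previous.equals(value)) {
            return false;                              // already current; skip the put
        }
        return region.replace(key, previous, value);   // changed; update atomically
    }

    public static void main(String[] args) {
        ConcurrentMap<String, Integer> region = new ConcurrentHashMap<>();
        System.out.println(putIfChanged(region, "a", 1)); // true: new entry
        System.out.println(putIfChanged(region, "a", 1)); // false: unchanged
        System.out.println(putIfChanged(region, "a", 2)); // true: updated
    }
}
```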
java.lang.UnsupportedOperationException: Use Pool APIs for doing operations when multiuser-secure-mode-enabled is set to true. I googled this and found an issue that was fixed in 2018. They are using GemFire 9.10.13, which should include this fix. There is something about using RegionService rather than Pool APIs referenced in a spring-data-geode bug that is not currently fixed. Is there something about the way they are using the API that could be triggering this? The code was originally written for GemFire 8.

Properties properties = new Properties();
properties.setProperty("security-username", username);
properties.setProperty("security-password", password);
RegionService regionService = this.cache.createAuthenticatedView(properties);
Region region = regionService.getRegion(regionName);
region.put...
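For multiuser-secure mode, every operation has to go through the per-user authenticated view (the RegionService returned by createAuthenticatedView) rather than the base ClientCache regions, which is what that exception is complaining about. A hedged sketch of the full pattern (the "Account" region name and the credentials are illustrative):

```java
// Hypothetical multi-user flow: all operations go through the per-user view.
Properties credentials = new Properties();
credentials.setProperty("security-username", "alice");
credentials.setProperty("security-password", "secret");

RegionService aliceView = clientCache.createAuthenticatedView(credentials);
Region<String, String> region = aliceView.getRegion("Account");
region.put("key", "value");                      // OK: routed through the view

// Queries must also come from the view's QueryService, not the cache's:
QueryService queryService = aliceView.getQueryService();

// By contrast, clientCache.getRegion("Account").put(...) bypasses the
// authenticated view and triggers the UnsupportedOperationException above.

aliceView.close();                               // release the user's connections
```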

[info 2022/04/20 14:54:07.168 PDT connectserver <main> tid=0x1] received FindCoordinatorResponse(coordinator=127.0.0.1(connectlocator:30784:locator)<ec><v0>:41000, fromView=true, viewId=0, registrants=[192.168.110.167(connectserver:11384):41000], senderId=127.0.0.1(connectlocator:30784:locator)<ec><v0>:41000, network partition detection enabled=true, locators preferred as coordinators=true, view=View[127.0.0.1(connectlocator:30784:locator)<ec><v0>:41000|0] members: [127.0.0.1(connectlocator:30784:locator)<ec><v0>:41000]) from locator HostAndPort[/127.0.0.1:10336] [info 2022/04/20 14:54:07.169 PDT connectserver <main> tid=0x1] Locator's address indicates it is part of a distributed system so I will not become membership coordinator on this attempt to join [info 2022/04/20 14:54:07.169 PDT connectserver <main> tid=0x1] findCoordinator chose 127.0.0.1(connectlocator:30784:locator)<ec><v0>:41000 out of these possible coordinators: [127.0.0.1(connectlocator:30784:locator)<ec><v0>:41000] [info 2022/04/20 14:54:07.169 PDT connectserver <main> tid=0x1] Discovery state after looking for membership coordinator is locatorsContacted=1; findInViewResponses=0; alreadyTried=[]; registrants=[]; possibleCoordinator=127.0.0.1(connectlocator:30784:locator)<ec><v0>:41000; viewId=0; hasContactedAJoinedLocator=true; view=View[127.0.0.1(connectlocator:30784:locator)<ec><v0>:41000|0] members: [127.0.0.1(connectlocator:30784:locator)<ec><v0>:41000]; responses=[] [info 2022/04/20 14:54:07.169 PDT connectserver <main> tid=0x1] found possible coordinator 127.0.0.1(connectlocator:30784:locator)<ec><v0>:41000 [info 2022/04/20 14:54:07.169 PDT connectserver <main> tid=0x1] Attempting to join the distributed system through coordinator 127.0.0.1(connectlocator:30784:locator)<ec><v0>:41000 using address 192.168.110.167(connectserver:11384):41000 [error 2022/04/20 14:54:07.176 PDT connectserver <main> tid=0x1] Exception caught while sending message java.net.BindException: Cannot assign requested 
address: Datagram send failed at java.base/java.net.TwoStacksPlainDatagramSocketImpl.send0(Native Method) at java.base/java.net.AbstractPlainDatagramSocketImpl.send(AbstractPlainDatagramSocketImpl.java:155)
gfsh>start locator --name=connectlocator --port=10336 --bind-address=127.0.0.1 Starting a Geode Locator in C:\Users\steve\connectlocator... ................... Locator in C:\Users\steve\connectlocator on 127.0.0.1[10336] as connectlocator is currently online. Process ID: 30784 Uptime: 11 seconds Geode Version: 1.14.4 Java Version: 11.0.14.1 Log File: C:\Users\steve\connectlocator\connectlocator.log JVM Arguments: -Dgemfire.enable-cluster-configuration=true -Dgemfire.load-cluster-configuration-from-dir=false -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806 Class-Path: C:\Users\steve\bin\apache-geode-1.14.4\lib\geode-core-1.14.4.jar;C:\Users\steve\bin\apache-geode-1.14.4\lib\geode-dependencies.jar Successfully connected to: JMX Manager [host=host.docker.internal, port=1099] Cluster configuration service is up and running.
start server --name=connectserver --server-port=40405 --server-bind-address=127.0.0.1 --bind-address=127.0.0.1
[info 2022/04/20 14:54:07.168 PDT connectserver <main> tid=0x1] received FindCoordinatorResponse(coordinator=127.0.0.1(connectlocator:30784:locator)<ec><v0>:41000, fromView=true, viewId=0, registrants=[192.168.110.167(connectserver:11384):41000], senderId=127.0.0.1(connectlocator:30784:locator)<ec><v0>:41000, network partition detection enabled=true, locators preferred as coordinators=true, view=View[127.0.0.1(connectlocator:30784:locator)<ec><v0>:41000|0] members: [127.0.0.1(connectlocator:30784:locator)<ec><v0>:41000]) from locator HostAndPort[/127.0.0.1:10336] [in…
1.15.0 build snapshots (build.1116) are corrupted. 1.15.0 from https://maven.apachegeode-ci.info/snapshots (see configuration in POM):

...
Downloading from geode-snapshot: https://maven.apachegeode-ci.info/snapshots/org/apache/geode/geode-wan/1.15.0-build.1116/geode-wan-1.15.0-build.1116.pom
Downloaded from geode-snapshot: https://maven.apachegeode-ci.info/snapshots/org/apache/geode/geode-wan/1.15.0-build.1116/geode-wan-1.15.0-build.1116.pom (3.3 kB at 13 kB/s)
start server
2022-05-08 20:26:06.158 WARN 42263 --- [ StatSampler] o.a.geode.internal.stats50.VMStats50 : Unable to make public long com.sun.management.internal.OperatingSystemImpl.getProcessCpuTime() accessible: module jdk.management does not "opens com.sun.management.internal" to unnamed module @4b56f1a9
[info 2022/05/08 20:26:06.767 PDT connectserver <Client Queue Initialization Thread 1> tid=0x6b] Entry expiry tasks disabled because the queue became primary. Old messageTimeToLive was: 180 [warn 2022/05/08 20:26:06.777 PDT connectserver <ServerConnection on port 40405 Thread 9> tid=0x8b] Exception on server while executing function: null java.lang.ClassNotFoundException: org.springframework.data.gemfire.client.function.ListRegionsOnServerFunction at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581) at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178) at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:522)
The first warning is just VMStats50 / JVM-stats reflection stuff. The second message (Entry expiry tasks disabled because the queue became primary. Old messageTimeToLive was: 180) is normal, not even an error or something to worry about. The third one (java.lang.ClassNotFoundException: org.springframework.data.gemfire.client.function.ListRegionsOnServerFunction) is thrown whenever a client using spring-boot-data-geode is connected to a server that was not started with spring-boot-data-geode. It can be ignored, though, as SBDG falls back to an internal native GemFire function whenever it can't retrieve the list of regions through ListRegionsOnServerFunction. It is triggered by the @EnableClusterDefinedRegions annotation, if I remember correctly, but, again, it can be safely ignored as Spring automatically catches the exception and retries with a native GemFire function (that is, an internal function that's shipped with GemFire itself).

start locator --name=localhost --bind-address=127.0.0.1 --hostname-for-clients=127.0.0.1 --http-service-bind-address=127.0.0.1
start server --name=server1 --server-bind-address=127.0.0.1 --hostname-for-clients=127.0.0.1 --jmx-manager-hostname-for-clients=127.0.0.1 --bind-address=127.0.0.1 --http-service-bind-address=127.0.0.1 --locators=127.0.0.1[10334]
show metrics --region --member

Send query --query='select count(*) from /RegionName' to the newly added cache server and execute it; if the number is the same as on the old cache server, then the replicate region distribution is completed.

Thread Name <Pooled Waiting Message Processor 159> state <WAITING> Waiting on <java.util.concurrent.locks.ReentrantLock$NonfairSync@8e49c40> Owned By <ServerConnection on port 7021 Thread 94606> with ID <155133> Executor Group <PooledExecutorWithDMStats> Monitored metric <ResourceManagerStats.numThreadsStuck> Thread stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:837) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:872) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1202) java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:213) java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:290) org.apache.geode.internal.cache.TXManagerImpl.getLock(TXManagerImpl.java:943) org.apache.geode.internal.cache.TXManagerImpl.masqueradeAs(TXManagerImpl.java:901) org.apache.geode.internal.cache.TXMessage.process(TXMessage.java:95) org.apache.geode.distributed.internal.DistributionMessage.scheduleAction(DistributionMessage.java:376) org.apache.geode.distributed.internal.DistributionMessage$1.run(DistributionMessage.java:441) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) org.apache.geode.distributed.internal.ClusterOperationExecutors.runUntilShutdown(ClusterOperationExecutors.java:441) org.apache.geode.distributed.internal.ClusterOperationExecutors.doWaitingThread(ClusterOperationExecutors.java:410) 
org.apache.geode.distributed.internal.ClusterOperationExecutors$$Lambda$185/1594722615.invoke(Unknown Source) org.apache.geode.logging.internal.executors.LoggingThreadFactory.lambda$newThread$0(LoggingThreadFactory.java:119) org.apache.geode.logging.internal.executors.LoggingThreadFactory$$Lambda$181/1759046479.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Lock owner thread stack sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1039) java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1332) java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277) org.apache.geode.internal.util.concurrent.StoppableCountDownLatch.await(StoppableCountDownLatch.java:72) org.apache.geode.distributed.internal.ReplyProcessor21.basicWait(ReplyProcessor21.java:733) org.apache.geode.distributed.internal.ReplyProcessor21.waitForRepliesUninterruptibly(ReplyProcessor21.java:804) org.apache.geode.distributed.internal.ReplyProcessor21.waitForRepliesUninterruptibly(ReplyProcessor21.java:781) org.apache.geode.distributed.internal.ReplyProcessor21.waitForRepliesUninterruptibly(ReplyProcessor21.java:867) org.apache.geode.internal.cache.TXCommitMessage$CommitReplyProcessor.waitForCommitCompletion(TXCommitMessage.java:2166) org.apache.geode.internal.cache.TXCommitMessage.send(TXCommitMessage.java:435) org.apache.geode.internal.cache.TXState.commit(TXState.java:504) org.apache.geode.internal.cache.TXStateProxyImpl.commit(TXStateProxyImpl.java:237) org.apache.geode.internal.cache.TXManagerImpl.commit(TXManagerImpl.java:430) org.apache.geode.internal.cache.tier.sockets.command.CommitCommand.commitTransaction(CommitCommand.java:97) org.apache.geode.internal.cache.tier.sockets.command.CommitCommand.cmdExecute(CommitCommand.java:85) 
org.apache.geode.internal.cache.tier.sockets.BaseCommand.execute(BaseCommand.java:177) org.apache.geode.internal.cache.tier.sockets.ServerConnection.doNormalMessage(ServerConnection.java:848) org.apache.geode.internal.cache.tier.sockets.OriginalServerConnection.doOneMessage(OriginalServerConnection.java:72) org.apache.geode.internal.cache.tier.sockets.ServerConnection.run(ServerConnection.java:1212) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) org.apache.geode.internal.cache.tier.sockets.AcceptorImpl.lambda$initializeServerConnectionThreadPool$3(AcceptorImpl.java:676) org.apache.geode.internal.cache.tier.sockets.AcceptorImpl$$Lambda$405/1639778373.invoke(Unknown Source) org.apache.geode.logging.internal.executors.LoggingThreadFactory.lambda$newThread$0(LoggingThreadFactory.java:119) org.apache.geode.logging.internal.executors.LoggingThreadFactory$$Lambda$181/1759046479.run(Unknown Source) java.lang.Thread.run(Thread.java:750)
[warn 2022/07/21 23:52:47.869 GMT-05:00 gfcache.iapp2008.randolph.ms.com.7021 <ServerConnection on port 7021 Thread 94606> tid=0x25dfd] 15 seconds have elapsed while waiting for replies: <TXCommitMessage$CommitReplyProcessor 66165855 waiting for 1 replies from [10.113.74.75(gfcache.iapp2013.howard.ms.com.7021:169570)<v3>:41005]> on 10.114.72.151(gfcache.iapp2008.randolph.ms.com.7021:65582)<v2>:41009 whose current membership list is: [[10.82.14.83(gflocator.ivapp1226781.randolph.ms.com.7020:16161:locator)<ec><v5>:41008, 10.113.24.87(gflocator.ivapp1226779.howard.ms.com.7020:10228:locator)<ec><v0>:41010, 10.113.74.75(gfcache.iapp2013.howard.ms.com.7021:169570)<v3>:41005, 10.114.72.151(gfcache.iapp2008.randolph.ms.com.7021:65582)<v2>:41009]]
[warn 2022/07/21 23:54:10.309 GMT-05:00 gfcache.iapp2013.howard.ms.com.7021 <ServerConnection on port 7021 Thread 99108> tid=0x2bfef] 15 seconds have elapsed while waiting for replies: <DLockRequestProcessor 37095343 waiting for 1 replies from [10.114.72.151(gfcache.iapp2008.randolph.ms.com.7021:65582)<v2>:41009]> on 10.113.74.75(gfcache.iapp2013.howard.ms.com.7021:169570)<v3>:41005 whose current membership list is: [[10.82.14.83(gflocator.ivapp1226781.randolph.ms.com.7020:16161:locator)<ec><v5>:41008, 10.113.24.87(gflocator.ivapp1226779.howard.ms.com.7020:10228:locator)<ec><v0>:41010, 10.113.74.75(gfcache.iapp2013.howard.ms.com.7021:169570)<v3>:41005, 10.114.72.151(gfcache.iapp2008.randolph.ms.com.7021:65582)<v2>:41009]] [warn 2022/07/21 23:54:41.309 GMT-05:00 gfcache.iapp2013.howard.ms.com.7021 <ServerConnection on port 7021 Thread 99120> tid=0x2bffe] 15 seconds have elapsed while waiting for replies: <DLockReleaseProcessor 37095610 waiting for 1 replies from [10.114.72.151(gfcache.iapp2008.randolph.ms.com.7021:65582)<v2>:41009]> on 10.113.74.75(gfcache.iapp2013.howard.ms.com.7021:169570)<v3>:41005 whose current membership list is: [[10.82.14.83(gflocator.ivapp1226781.randolph.ms.com.7020:16161:locator)<ec><v5>:41008, 10.113.24.87(gflocator.ivapp1226779.howard.ms.com.7020:10228:locator)<ec><v0>:41010, 10.113.74.75(gfcache.iapp2013.howard.ms.com.7021:169570)<v3>:41005, 10.114.72.151(gfcache.iapp2008.randolph.ms.com.7021:65582)<v2>:41009]] [info 2022/07/21 23:54:41.346 GMT-05:00 gfcache.iapp2013.howard.ms.com.7021 <ServerConnection on port 7021 Thread 99108> tid=0x2bfef] DLockRequestProcessor wait for replies completed [info 2022/07/21 23:54:51.138 GMT-05:00 gfcache.iapp2013.howard.ms.com.7021 <ServerConnection on port 7021 Thread 99120> tid=0x2bffe] DLockReleaseProcessor wait for replies completed [warn 2022/07/21 23:55:06.261 GMT-05:00 gfcache.iapp2013.howard.ms.com.7021 <ServerConnection on port 7021 Thread 99125> tid=0x2c006] 15 seconds have elapsed while 
waiting for replies: <TXRemoteCommitMessage$RemoteCommitResponse 37095614 waiting for 1 replies from [10.114.72.151(gfcache.iapp2008.randolph.ms.com.7021:65582)<v2>:41009]> on 10.113.74.75(gfcache.iapp2013.howard.ms.com.7021:169570)<v3>:41005 whose current membership list is: [[10.82.14.83(gflocator.ivapp1226781.randolph.ms.com.7020:16161:locator)<ec><v5>:41008, 10.113.24.87(gflocator.ivapp1226779.howard.ms.com.7020:10228:locator)<ec><v0>:41010, 10.113.74.75(gfcache.iapp2013.howard.ms.com.7021:169570)<v3>:41005, 10.114.72.151(gfcache.iapp2008.randolph.ms.com.7021:65582)<v2>:41009]]
Is there a way to inspect the cluster to which a ClientCache instance is connected? For example, I am not sure if it is possible, using the ClientCache's DistributedSystem instance, to inspect the remote (cluster) peer members (e.g. by name). The getAllOtherMembers() method always returns a Set of size 0, implying that the DistributedSystem just pertains to the client. However, I am more interested in the cluster (or the DS of the cluster) to which the client is connected. Having to resort to Functions for this is highly inconvenient.

Will the Pool's getOnlineLocators() method and getServers() method reflect the structure of the server cluster regardless of how I configured the client Pool initially? For instance, if I only configured the client Pool to connect to LocatorOne, will getOnlineLocators() eventually reflect that there are actually 2 Locators in the cluster (LocatorOne and LocatorTwo), and will getServers() pick up on CacheServerOne?
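One thing that can be checked from the client without Functions is the Pool itself. A sketch, assuming an existing `clientCache` variable; getOnlineLocators() and getServers() return the client's current view, and whether that view grows beyond the initially configured endpoints depends on locator-based discovery being in use:

```java
// Inspect the client's view of the cluster via the default Pool.
Pool pool = clientCache.getDefaultPool();
List<InetSocketAddress> locators = pool.getOnlineLocators(); // live locators known to the client
List<InetSocketAddress> servers = pool.getServers();         // cache servers known to the client
locators.forEach(locator -> System.out.println("locator: " + locator));
servers.forEach(server -> System.out.println("server: " + server));
```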
create region --name my-region --region-time-to-live-expiration 30 --enable-statistics --type REPLICATE --if-not-exists
describe region --name my-region
---
Name : my-region
Data Policy : replicate
Hosting Members : gemfire-geode-server-0
Non-Default Attributes Shared By Hosting Members
Type | Name | Value
------ | --------------------------- | ---------------
Region | data-policy | REPLICATE
| region-time-to-live.timeout | 30
| size | 0
| statistics-enabled | true
| scope                       | distributed-ack

Region region = clientCache.getRegion("my-region");
if (region == null) {
    region = clientCache.createClientRegionFactory(ClientRegionShortcut.PROXY).create("my-region");
}

But region.getAttributes().getRegionTimeToLive().getTimeout(), or even region.getAttributes(), contains no data.

When you call region.getAttributes().getRegionTimeToLive(), you are getting the TTL configured on the client. In your case, your client region does not have a TTL. What I'm not quite sure about is the best way to get the TTL configured on the server from the client. One option is that you could write a function that runs on the server that could return the TTL to the client.

org.apache.geode.cache.execute.FunctionException: org.apache.geode.cache.execute.FunctionException: IOException while sending the last chunk to client at com.citi.ntq.financial.gemfire.functions.FinancialDataAwareFunction.lambda$execute$0(FinancialDataAwareFunction.java:79) at com.citi.ntq.financial.utils.TimeTracker.track(TimeTracker.java:36) at com.citi.ntq.financial.gemfire.functions.FinancialDataAwareFunction.execute(FinancialDataAwareFunction.java:70) at org.apache.geode.internal.cache.execute.AbstractExecution.executeFunctionLocally(AbstractExecution.java:328) at org.apache.geode.internal.cache.execute.AbstractExecution.lambda$executeFunctionOnLocalPRNode$0(AbstractExecution.java:273) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at org.apache.geode.distributed.internal.ClusterOperationExecutors.runUntilShutdown(ClusterOperationExecutors.java:441) at org.apache.geode.distributed.internal.ClusterOperationExecutors.doFunctionExecutionThread(ClusterOperationExecutors.java:376) at org.apache.geode.logging.internal.executors.LoggingThreadFactory.lambda$newThread$0(LoggingThreadFactory.java:119) at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.geode.cache.execute.FunctionException: IOException while sending the last chunk to client at 
org.apache.geode.internal.cache.execute.ServerToClientFunctionResultSender65.lastResult(ServerToClientFunctionResultSender65.java:164) at org.apache.geode.internal.cache.execute.PartitionedRegionFunctionResultSender.lastClientSend(PartitionedRegionFunctionResultSender.java:377) at org.apache.geode.internal.cache.execute.PartitionedRegionFunctionResultSender.lastResult(PartitionedRegionFunctionResultSender.java:157) at com.citi.ntq.financial.gemfire.functions.FinancialDataAwareFunction.lambda$execute$0(FinancialDataAwareFunction.java:73) ... 10 moreCaused by: org.apache.geode.internal.cache.tier.sockets.MessageTooLargeException: Message size (1375784172) exceeds gemfire.client.max-message-size setting (1073741824) at org.apache.geode.internal.cache.tier.sockets.Message.sendBytes(Message.java:605) at org.apache.geode.internal.cache.tier.sockets.ChunkedMessage.sendChunk(ChunkedMessage.java:312) at org.apache.geode.internal.cache.tier.sockets.ChunkedMessage.sendChunk(ChunkedMessage.java:322) at org.apache.geode.internal.cache.execute.ServerToClientFunctionResultSender65.lastResult(ServerToClientFunctionResultSender65.java:158) ... 13 more (edited)
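On getting the server-side TTL from the client (suggested a few messages up): a hedged sketch of such a server-side Function; the id "GetRegionTtlFunction" and the choice to pass the region name as the argument are illustrative, not an established API:

```java
import org.apache.geode.cache.Region;
import org.apache.geode.cache.execute.Function;
import org.apache.geode.cache.execute.FunctionContext;

// Deployed on the servers; returns the region's TTL timeout in seconds,
// or -1 if the region does not exist on this member.
public class GetRegionTtlFunction implements Function<String> {

    @Override
    public void execute(FunctionContext<String> context) {
        String regionName = context.getArguments();
        Region<?, ?> region = context.getCache().getRegion(regionName);
        int ttlSeconds = region == null
            ? -1
            : region.getAttributes().getRegionTimeToLive().getTimeout();
        context.getResultSender().lastResult(ttlSeconds);
    }

    @Override
    public String getId() {
        return "GetRegionTtlFunction";
    }
}
```

On the client, this might be invoked with something like FunctionService.onServer(clientCache.getDefaultPool()).setArguments("my-region").execute("GetRegionTtlFunction"), then reading the first value from the ResultCollector.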
I know GemFire includes the ipsec add-on, but does the product also include the AntiVirus and FIM add-ons? I can see these are regularly downloaded by the pcf-gemfire@pivotal.io account. Just wanting to understand our upstream dependencies.
A Friendly Reminder:
Wednesday Sep 7th at 2 PM Eastern!2022-09-19 13:47:01.159 WARN 1 --- [ ThreadsMonitor] o.a.g.i.m.ThreadsMonitoringProcess : Thread 309 (0x135) is stuck 2022-09-19 13:47:01.160 WARN 1 --- [ ThreadsMonitor] o.a.g.i.m.executor.AbstractExecutor : Thread <309> (0x135) that was executed at <19 Sep 2022 13:46:23 UTC> has been stuck for <37.68 seconds> and number of thread monitor iteration <1> Thread Name <Function Execution Processor5> state <TIMED_WAITING> Waiting on <java.util.concurrent.CountDownLatch$Sync@3bf68836> Executor Group <FunctionExecutionPooledExecutor> Monitored metric <ResourceManagerStats.numThreadsStuck> Thread stack for "Function Execution Processor5" (0x135): java.lang.ThreadState: TIMED_WAITING at java.base@11.0.16/jdk.internal.misc.Unsafe.park(Native Method) at java.base@11.0.16/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234) at java.base@11.0.16/java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1079) at java.base@11.0.16/java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1369) at java.base@11.0.16/java.util.concurrent.CountDownLatch.await(CountDownLatch.java:278) at app//org.apache.geode.internal.util.concurrent.StoppableCountDownLatch.awaitWithCheck(StoppableCountDownLatch.java:120) at app//org.apache.geode.internal.util.concurrent.StoppableCountDownLatch.await(StoppableCountDownLatch.java:93) at app//org.apache.geode.distributed.internal.ReplyProcessor21.basicWait(ReplyProcessor21.java:685) at app//org.apache.geode.distributed.internal.ReplyProcessor21.waitForRepliesUninterruptibly(ReplyProcessor21.java:795) at app//org.apache.geode.distributed.internal.ReplyProcessor21.waitForRepliesUninterruptibly(ReplyProcessor21.java:772) at app//org.apache.geode.distributed.internal.ReplyProcessor21.waitForRepliesUninterruptibly(ReplyProcessor21.java:858) at 
app//org.apache.geode.internal.cache.partitioned.PartitionMessage$PartitionResponse.waitForCacheException(PartitionMessage.java:815) at app//org.apache.geode.internal.cache.partitioned.GetMessage$GetResponse.waitForResponse(GetMessage.java:586) at app//org.apache.geode.internal.cache.PartitionedRegion.getRemotely(PartitionedRegion.java:4865) at app//org.apache.geode.internal.cache.PartitionedRegion.getFromBucket(PartitionedRegion.java:4165) at app//org.apache.geode.internal.cache.PartitionedRegion.findObjectInSystem(PartitionedRegion.java:3552) at app//org.apache.geode.internal.cache.PartitionedRegionDataView.findObject(PartitionedRegionDataView.java:69) at app//org.apache.geode.internal.cache.PartitionedRegion.get(PartitionedRegion.java:3337) at app//org.apache.geode.internal.cache.LocalRegion.get(LocalRegion.java:1309) at app//org.apache.geode.internal.cache.AbstractRegion.get(AbstractRegion.java:451) at app//com.vmware.server.service.AggregateService.performGroupAggregate(AggregateService.java:102) at app//com.vmware.server.service.AggregateService.performDailyGroupAggregate(AggregateService.java:87) at app//com.vmware.server.service.AggregateService.lambda$doPerformAggregates$2(AggregateService.java:72) at app//com.vmware.server.service.AggregateService$$Lambda$1238/0x00007fe0cde69960.accept(Unknown Source) at java.base@11.0.16/java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183) at java.base@11.0.16/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195) at java.base@11.0.16/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1655) at java.base@11.0.16/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484) at java.base@11.0.16/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474) at java.base@11.0.16/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150) at 
java.base@11.0.16/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173) at java.base@11.0.16/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) at java.base@11.0.16/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:497) at app//com.vmware.server.service.AggregateService.doPerformAggregates(AggregateService.java:71) at app//com.vmware.server.service.AggregateService.lambda$performAggregates$0(AggregateService.java:53) at app//com.vmware.server.service.AggregateService$$Lambda$1226/0x00007fe0cde094b0.run(Unknown Source) at app//io.micrometer.core.instrument.AbstractTimer.record(AbstractTimer.java:171) at app//com.vmware.server.service.AggregateService.performAggregates(AggregateService.java:53) at app//com.vmware.server.function.UsageEventFunction$$Lambda$1219/0x00007fe0cdde9cb0.accept(Unknown Source) at java.base@11.0.16/java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183) Locked ownable synchronizers: - None 2022-09-19 13:47:01.160 WARN 1 --- [ ThreadsMonitor] o.a.g.i.m.ThreadsMonitoringProcess : There are 4 stuck threads in this node 2022-09-19 13:47:15.899 INFO 1 --- [ 40404 Thread 1] o.a.g.d.internal.ReplyProcessor21 : PRFunctionStreamingResultCollector wait for replies completed
/ at the beginning). So perhaps try with:

GemFire:type=Distributed,service=Region,name=/testRegionC,member=server1

[info 2022/11/03 14:23:41.392 EDT redisServer1 <GeodeRedisServer-WorkerThread-1> tid=0xa1] Initialization of region _B__GEMFIRE__FOR__REDIS_127 completed
[warn 2022/11/03 14:23:41.392 EDT redisServer1 <GeodeRedisServer-WorkerThread-1> tid=0xa1] Configured redundancy level could not be satisfied. Advise you to start enough data store nodes to satisfy redundancy for the region. Partitioned Region name = /GEMFIRE_FOR_REDIS Redundancy level set to 1 . Number of available data stores: 1 . Number successfully allocated = 1 . Data stores: [127.0.0.1(redisServer1:78400)<v1>:41001] . Data stores successfully allocated: [127.0.0.1(redisServer1:78400)<v1>:41001] . Equivalent members: [127.0.0.1(locator:78262:locator)<ec><v0>:41000, 127.0.0.1(redisServer1:78400)<v1>:41001]
[info 2022/11/03 14:25:22.105 EDT redisServer1 <GeodeRedisServer-WorkerThread-1> tid=0xa1] Instantiator registered with id 1047 class com.vmware.gemfire.redis.internal.data.NullRedisData
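The ObjectName pattern suggested above can be sanity-checked with the JDK's own `javax.management` classes, no GemFire jars required. A minimal sketch (the `regionObjectName` helper is illustrative, not GemFire API; the region and member names come from the example above):

```java
import javax.management.MalformedObjectNameException;
import javax.management.ObjectName;

public class RegionObjectNameExample {
    // Builds the distributed-region MBean name. Note the region name keeps
    // its leading "/" — that is the detail the advice above is pointing at.
    static ObjectName regionObjectName(String regionPath, String member) {
        try {
            return new ObjectName(
                "GemFire:type=Distributed,service=Region,name=" + regionPath + ",member=" + member);
        } catch (MalformedObjectNameException e) {
            throw new IllegalArgumentException(e);
        }
    }

    public static void main(String[] args) {
        ObjectName name = regionObjectName("/testRegionC", "server1");
        // The "name" key property retains the leading slash.
        System.out.println(name.getKeyProperty("name")); // -> /testRegionC
    }
}
```

If the ObjectName parses here but the MBean lookup still fails, the mismatch is usually in the key property values rather than the pattern.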
ERROR 78671 --- [nio-8080-exec-1] o.a.c.c.C.[.[.[/].[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is org.springframework.data.redis.RedisSystemException: Error in execution; nested exception is io.lettuce.core.RedisCommandExecutionException: MOVED 12740 192.168.1.76:6379] with root cause
io.lettuce.core.RedisCommandExecutionException: MOVED 12740 192.168.1.76:6379
at io.lettuce.core.internal.ExceptionFactory.createExecutionException(ExceptionFactory.java:147) ~[lettuce-core-6.1.9.RELEASE.jar!/:6.1.9.RELEASE]
at io.lettuce.core.internal.ExceptionFactory.createExecutionException(ExceptionFactory.java:116) ~[lettuce-core-6.1.9.RELEASE.jar!/:6.1.9.RELEASE]
at io.lettuce.core.protocol.AsyncCommand.completeResult(AsyncCommand.java:120) ~[lettuce-core-6.1.9.RELEASE.jar!/:6.1.9.RELEASE]
at io.lettuce.core.protocol.AsyncCommand.complete(AsyncCommand.java:111) ~[lettuce-core-6.1.9.RELEASE.jar!/:6.1.9.RELEASE]
at io.lettuce.core.protocol.CommandWrapper.complete(CommandWrapper.java:63) ~[lettuce-core-6.1.9.RELEASE.jar!/:6.1.9.RELEASE]
at io.lettuce.core.protocol.CommandHandler.complete(CommandHandler.java:747) ~[lettuce-core-6.1.9.RELEASE.jar!/:6.1.9.RELEASE]
at io.lettuce.core.protocol.CommandHandler.decode(CommandHandler.java:682) ~[lettuce-core-6.1.9.RELEASE.jar!/:6.1.9.RELEASE]
at io.lettuce.core.protocol.CommandHandler.channelRead(CommandHandler.java:599) ~[lettuce-core-6.1.9.RELEASE.jar!/:6.1.9.RELEASE]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[netty-transport-4.1.79.Final.jar!/:4.1.79.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[netty-transport-4.1.79.Final.jar!/:4.1.79.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) ~[netty-transport-4.1.79.Final.jar!/:4.1.79.Final]
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) ~[netty-transport-4.1.79.Final.jar!/:4.1.79.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[netty-transport-4.1.79.Final.jar!/:4.1.79.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[netty-transport-4.1.79.Final.jar!/:4.1.79.Final]
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) ~[netty-transport-4.1.79.Final.jar!/:4.1.79.Final]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) ~[netty-transport-4.1.79.Final.jar!/:4.1.79.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:722) ~[netty-transport-4.1.79.Final.jar!/:4.1.79.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:658) ~[netty-transport-4.1.79.Final.jar!/:4.1.79.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:584) ~[netty-transport-4.1.79.Final.jar!/:4.1.79.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496) ~[netty-transport-4.1.79.Final.jar!/:4.1.79.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) ~[netty-common-4.1.79.Final.jar!/:4.1.79.Final]
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[netty-common-4.1.79.Final.jar!/:4.1.79.Final]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[netty-common-4.1.79.Final.jar!/:4.1.79.Final]
at java.base/java.lang.Thread.run(Thread.java:833) ~[na:na]

gf-awsamenval00ev.jnj.com-server-s1-11-05-2022-1.log:[2022-11-05 00:01:27,038 WARN ReplyProcessor21.timeout] 15 seconds have elapsed while waiting for replies: <DLockRequestProcessor 190697 waiting for 1 replies from [10.59.170.70(awsamenval00eu.jnj.com-s-s1:32122)<v3>:41000]> on 10.59.170.22(awsamenval00ev.jnj.com-s-s1:29528)<v4>:41000 whose current membership list is: [[10.59.170.22(awsamenval00ev.jnj.com-s-s1:29528)<v4>:41000, 10.59.170.246(awsamenval00f0.jnj.com-s-s1:29114)<v8>:41000, 10.59.170.55(awsamenval00et.jnj.com-s-s1:42911)<v3>:41000, 10.59.170.87(awsamenval00ew.jnj.com-s-s1:32595)<v5>:41000, 10.59.170.113(awsamenval00f1.jnj.com-s-s1:39415)<v9>:41000, 10.59.170.108(awsamenval00ek.jnj.com-l-node0:38625:locator)<ec><v0>:41000, 10.59.170.37(awsamenval00es.jnj.com-s-s1:25377)<v2>:41000, 10.59.170.133(awsamenval00f2.jnj.com-s-s1:31627)<v10>:41000, 10.59.170.70(awsamenval00eu.jnj.com-s-s1:32122)<v3>:41000, 10.59.170.134(awsamenval00ey.jnj.com-s-s1:28997)<v6>:41000, 10.59.170.252(awsamenval00el.jnj.com-l-node0:43205:locator)<ec><v1>:41000, 10.59.170.185(awsamenval00ex.jnj.com-s-s1:40323)<v5>:41000, 10.59.170.219(awsamenval00ez.jnj.com-s-s1:36597)<v7>:41000,
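On the MOVED error earlier in this trace: a MOVED reply generally means the server side is operating as a Redis cluster while the Lettuce client was configured in standalone mode, so it does not follow slot redirects. A hedged sketch of the Spring Boot 2.x properties that enable cluster mode (the node address is the redirect target from the log; your topology will have more nodes):

```properties
# Enables Lettuce cluster support so MOVED redirects are followed transparently.
spring.redis.cluster.nodes=192.168.1.76:6379
# Assumption: Boot 2.x property names; lets the client refresh its slot map as the topology changes.
spring.redis.lettuce.cluster.refresh.adaptive=true
```

This is a configuration fragment, not a verified fix for this specific deployment; confirm the property names against the Spring Boot version in use.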
gf-awsamesgpl003w.jnj.com-server-s1-11-04-2022-1.log:[2022-11-04 13:31:09,475 WARN HostStatSampler.checkElapsedSleepTime] Statistics sampling thread detected a wakeup delay of 3221 ms, indicating a possible resource issue. Check the GC, memory, and CPU statistics.
gf-awsamesgpl0042.jnj.com-server-s1.log:[2022-11-09 05:50:02,356 WARN HostStatSampler.checkElapsedSleepTime] Statistics sampling thread detected a wakeup delay of 3069 ms, indicating a possible resource issue. Check the GC, memory, and CPU statistics.
Will GemFire 9.10.13 work on AWS? If it is not supported, will later versions work there? Note: SBI SECURITIES is considering a migration to AWS, so please confirm the above.
apiVersion: gemfire.vmware.com/v1
kind: GemFireCluster
metadata:
  name: gemfire-cluster
spec:
  image: registry.tanzu.vmware.com/pivotal-gemfire/vmware-gemfire:9.15.0
  servers:
    overrides:
      gemfireProperties:
        start-dev-rest-api: "true"

error: error validating "STDIN": error validating data: ValidationError(GemFireCluster.spec.servers.overrides): unknown field "gemfireProperties" in com.vmware.gemfire.v1.GemFireCluster.spec.servers.overrides; if you choose to ignore these errors, turn validation off with --validate=false
Tried gemfireProperties and gemFireProperties as per the CRD reference, both to no avail (edited)

apiVersion: gemfire.vmware.com/v1
kind: GemFireCluster
metadata:
  name: gemfire-cluster
spec:
  image: registry.tanzu.vmware.com/pivotal-gemfire/vmware-gemfire:9.15.0
  servers:
    overrides:
      gemFireProperties:
        - name: "start-dev-rest-api"
          value: "true"
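One way to discover the field names the installed CRD actually accepts is `kubectl explain` against the live cluster (this assumes kubectl access to a cluster where the GemFire operator CRD is installed):

```
kubectl explain gemfirecluster.spec.servers.overrides --recursive
```

The output lists the schema the API server will validate against, which settles the gemfireProperties vs. gemFireProperties question for the exact operator version deployed.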
It is now available on Tanzu Net to download.

What’s New in VMware GemFire for Tanzu Application Service 1.14.6

PCC_CLUSTER-READ-ONLY security role. This role has CLUSTER:READ and DATA:READ permissions.

“Service instance metrics will not be retrievable unless users deselect the Enable Log Cache Syslog Ingestion checkbox in the System Logging pane of the Tanzu Application Service for VMs tile in Ops Manager.”
parallel="false".

<gateway-sender id="sender1" parallel="false" remote-distributed-system-id="2" disk-store-name="dataStore">

2022-12-20 02:10:29.397 [ThreadsMonitor] WARN org.apache.geode.internal.monitoring.ThreadsMonitoringProcess - Thread 646 (0x286) is stuck
2022-12-20 02:10:29.413 [ThreadsMonitor] WARN org.apache.geode.internal.monitoring.executor.AbstractExecutor - Thread <646> (0x286) that was executed at <20 Dec 2022 02:09:16 CST> has been stuck for <72.891 seconds> and number of thread monitor iteration <1> Thread Name <Pooled Serial Message Processor2-1> state <WAITING> Waiting on <java.util.concurrent.locks.ReentrantReadWriteLock$FairSync@1ce290a7> Owned By <ServerConnection on port 42973 Thread 63> with ID <930> Executor Group <SerialQueuedExecutorWithDMStats> Monitored metric <ResourceManagerStats.numThreadsStuck> Thread stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199) java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock.lock(ReentrantReadWriteLock.java:943) org.apache.geode.internal.cache.wan.serial.SerialGatewaySenderQueue.put(SerialGatewaySenderQueue.java:223) org.apache.geode.internal.cache.wan.serial.SerialGatewaySenderEventProcessor.queuePrimaryEvent(SerialGatewaySenderEventProcessor.java:477) org.apache.geode.internal.cache.wan.serial.SerialGatewaySenderEventProcessor.enqueueEvent(SerialGatewaySenderEventProcessor.java:445) org.apache.geode.internal.cache.wan.serial.ConcurrentSerialGatewaySenderEventProcessor.enqueueEvent(ConcurrentSerialGatewaySenderEventProcessor.java:162) 
org.apache.geode.internal.cache.wan.serial.ConcurrentSerialGatewaySenderEventProcessor.enqueueEvent(ConcurrentSerialGatewaySenderEventProcessor.java:116) org.apache.geode.internal.cache.wan.AbstractGatewaySender.distribute(AbstractGatewaySender.java:1082) org.apache.geode.internal.cache.LocalRegion.notifyGatewaySender(LocalRegion.java:6141) org.apache.geode.internal.cache.LocalRegion.basicPutPart2(LocalRegion.java:5777) org.apache.geode.internal.cache.map.RegionMapPut.doBeforeCompletionActions(RegionMapPut.java:282) org.apache.geode.internal.cache.map.AbstractRegionMapPut.doPutAndDeliverEvent(AbstractRegionMapPut.java:301) org.apache.geode.internal.cache.map.AbstractRegionMapPut$$Lambda$258/445977285.run(Unknown Source) org.apache.geode.internal.cache.map.AbstractRegionMapPut.runWithIndexUpdatingInProgress(AbstractRegionMapPut.java:308) org.apache.geode.internal.cache.map.AbstractRegionMapPut.doPutIfPreconditionsSatisified(AbstractRegionMapPut.java:296) org.apache.geode.internal.cache.map.AbstractRegionMapPut.doPutOnSynchronizedRegionEntry(AbstractRegionMapPut.java:282) org.apache.geode.internal.cache.map.AbstractRegionMapPut.doPutOnRegionEntryInMap(AbstractRegionMapPut.java:273) org.apache.geode.internal.cache.map.AbstractRegionMapPut.addRegionEntryToMapAndDoPut(AbstractRegionMapPut.java:251) org.apache.geode.internal.cache.map.AbstractRegionMapPut.doPutRetryingIfNeeded(AbstractRegionMapPut.java:216) org.apache.geode.internal.cache.map.AbstractRegionMapPut$$Lambda$256/2071190751.run(Unknown Source) org.apache.geode.internal.cache.map.AbstractRegionMapPut.doWithIndexInUpdateMode(AbstractRegionMapPut.java:198) org.apache.geode.internal.cache.map.AbstractRegionMapPut.doPut(AbstractRegionMapPut.java:180) org.apache.geode.internal.cache.map.AbstractRegionMapPut$$Lambda$255/113991150.run(Unknown Source) org.apache.geode.internal.cache.map.AbstractRegionMapPut.runWhileLockedForCacheModification(AbstractRegionMapPut.java:119) 
org.apache.geode.internal.cache.map.RegionMapPut.runWhileLockedForCacheModification(RegionMapPut.java:161) org.apache.geode.internal.cache.map.AbstractRegionMapPut.put(AbstractRegionMapPut.java:169) org.apache.geode.internal.cache.AbstractRegionMap.basicPut(AbstractRegionMap.java:2044) org.apache.geode.internal.cache.LocalRegion.virtualPut(LocalRegion.java:5602) org.apache.geode.internal.cache.DistributedRegion.virtualPut(DistributedRegion.java:387) org.apache.geode.internal.cache.LocalRegionDataView.putEntry(LocalRegionDataView.java:170) org.apache.geode.internal.cache.LocalRegion.basicUpdate(LocalRegion.java:5573) org.apache.geode.internal.cache.AbstractUpdateOperation.doPutOrCreate(AbstractUpdateOperation.java:156) org.apache.geode.internal.cache.AbstractUpdateOperation$AbstractUpdateMessage.basicOperateOnRegion(AbstractUpdateOperation.java:307) org.apache.geode.internal.cache.DistributedPutAllOperation$PutAllMessage.doEntryPut(DistributedPutAllOperation.java:1114) org.apache.geode.internal.cache.DistributedPutAllOperation$PutAllMessage$1.run(DistributedPutAllOperation.java:1194) org.apache.geode.internal.cache.event.DistributedEventTracker.syncBulkOp(DistributedEventTracker.java:481) Lock owner thread stack sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037) java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328) java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277) org.apache.geode.internal.util.concurrent.StoppableCountDownLatch.await(StoppableCountDownLatch.java:72) org.apache.geode.distributed.internal.ReplyProcessor21.basicWait(ReplyProcessor21.java:731) org.apache.geode.distributed.internal.ReplyProcessor21.waitForRepliesUninterruptibly(ReplyProcessor21.java:802) 
org.apache.geode.distributed.internal.ReplyProcessor21.waitForRepliesUninterruptibly(ReplyProcessor21.java:779) org.apache.geode.distributed.internal.ReplyProcessor21.waitForRepliesUninterruptibly(ReplyProcessor21.java:865) org.apache.geode.internal.cache.DistributedCacheOperation.waitForAckIfNeeded(DistributedCacheOperation.java:779) org.apache.geode.internal.cache.DistributedCacheOperation._distribute(DistributedCacheOperation.java:676) org.apache.geode.internal.cache.DistributedCacheOperation.startOperation(DistributedCacheOperation.java:277) org.apache.geode.internal.cache.DistributedCacheOperation.distribute(DistributedCacheOperation.java:318) org.apache.geode.internal.cache.DistributedRegion.distributeUpdate(DistributedRegion.java:514) org.apache.geode.internal.cache.DistributedRegion.basicPutPart3(DistributedRegion.java:492) org.apache.geode.internal.cache.map.RegionMapPut.doAfterCompletionActions(RegionMapPut.java:307) org.apache.geode.internal.cache.map.AbstractRegionMapPut.doPut(AbstractRegionMapPut.java:185) org.apache.geode.internal.cache.map.AbstractRegionMapPut$$Lambda$255/113991150.run(Unknown Source) org.apache.geode.internal.cache.map.AbstractRegionMapPut.runWhileLockedForCacheModification(AbstractRegionMapPut.java:119) org.apache.geode.internal.cache.map.RegionMapPut.runWhileLockedForCacheModification(RegionMapPut.java:161) org.apache.geode.internal.cache.map.AbstractRegionMapPut.put(AbstractRegionMapPut.java:169) org.apache.geode.internal.cache.AbstractRegionMap.basicPut(AbstractRegionMap.java:2044) org.apache.geode.internal.cache.LocalRegion.virtualPut(LocalRegion.java:5602) org.apache.geode.internal.cache.DistributedRegion.virtualPut(DistributedRegion.java:387) org.apache.geode.internal.cache.wan.serial.SerialGatewaySenderQueue$SerialGatewaySenderQueueMetaRegion.virtualPut(SerialGatewaySenderQueue.java:1215) org.apache.geode.internal.cache.LocalRegion.virtualPut(LocalRegion.java:5580) 
org.apache.geode.internal.cache.LocalRegionDataView.putEntry(LocalRegionDataView.java:156) org.apache.geode.internal.cache.LocalRegion.basicPut(LocalRegion.java:5038) org.apache.geode.internal.cache.LocalRegion.validatedPut(LocalRegion.java:1637) org.apache.geode.internal.cache.LocalRegion.put(LocalRegion.java:1624) org.apache.geode.internal.cache.AbstractRegion.put(AbstractRegion.java:442) org.apache.geode.internal.cache.wan.serial.SerialGatewaySenderQueue.putAndGetKey(SerialGatewaySenderQueue.java:245) org.apache.geode.internal.cache.wan.serial.SerialGatewaySenderQueue.put(SerialGatewaySenderQueue.java:232) org.apache.geode.internal.cache.wan.serial.SerialGatewaySenderEventProcessor.queuePrimaryEvent(SerialGatewaySenderEventProcessor.java:477) org.apache.geode.internal.cache.wan.serial.SerialGatewaySenderEventProcessor.enqueueEvent(SerialGatewaySenderEventProcessor.java:445) org.apache.geode.internal.cache.wan.serial.ConcurrentSerialGatewaySenderEventProcessor.enqueueEvent(ConcurrentSerialGatewaySenderEventProcessor.java:162) org.apache.geode.internal.cache.wan.serial.ConcurrentSerialGatewaySenderEventProcessor.enqueueEvent(ConcurrentSerialGatewaySenderEventProcessor.java:116) org.apache.geode.internal.cache.wan.AbstractGatewaySender.distribute(AbstractGatewaySender.java:1082) org.apache.geode.internal.cache.LocalRegion.notifyGatewaySender(LocalRegion.java:6141) 2022-12-20 02:43:13.252 [Thread-10] INFO org.apache.geode.distributed.internal.ReplyProcessor21 - DistributedCacheOperation$CacheOperationReplyProcessor wait for replies completed
Is it possible to deploy a custom web application into GemFire, just like Pulse? Can the VMware GemFire team provide any examples, code, or use cases for the JPMC team to refer to?
java -jar vmware-gemfire-management-console-1.0.0-beta.1.jar

[Spring Boot ASCII banner] :: Spring Boot :: (v2.7.6)

2023-01-30 14:05:10.977 INFO 29752 --- [ main] c.v.g.gmc.GemFireGuiBackendApplication : Starting GemFireGuiBackendApplication using Java 17.0.4.1 on gregoryg7PCFV.vmware.com with PID 29752 (/Users/devtools/repositories/IMDG/gemfire/gideon-console/vmware-gemfire-management-console-1.0.0-beta.1.jar started by gregoryg in /Users/devtools/repositories/IMDG/gemfire/gideon-console)
2023-01-30 14:05:10.990 INFO 29752 --- [ main] c.v.g.gmc.GemFireGuiBackendApplication : No active profile set, falling back to 1 default profile: "default"
2023-01-30 14:05:15.260 INFO 29752 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Bootstrapping Spring Data Gemfire repositories in DEFAULT mode.
2023-01-30 14:05:16.307 INFO 29752 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Finished Spring Data repository scanning in 1027 ms. Found 3 Gemfire repository interfaces.
2023-01-30 14:05:16.969 ERROR 29752 --- [ main] o.s.boot.SpringApplication : Application run failed
java.lang.reflect.InaccessibleObjectException: Unable to make field private final long java.util.UUID.mostSigBits accessible: module java.base does not "opens java.util" to unnamed module @18e8568
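The InaccessibleObjectException names the exact module and package involved (`java.base` does not open `java.util`). A common Java 17 workaround, offered here as an assumption rather than an official launch recipe for the console, is to open that package to the unnamed module at launch:

```
java --add-opens java.base/java.util=ALL-UNNAMED -jar vmware-gemfire-management-console-1.0.0-beta.1.jar
```

If further InaccessibleObjectException errors appear, each one names the next module/package pair to add another `--add-opens` flag for.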
2023-01-30 14:48:02.867 ERROR 42078 --- [nio-7077-exec-3] c.v.g.gmc.error.CustomExceptionHandler : error dto : ErrorDto(statusCode=404, responseStatus=ERROR, errorMsg=javax.management.InstanceNotFoundException: GemFire:service=Region,name=/retail.stream.transaction-0,type=Member,member=server1 at java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1088) at java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:640) at java.management/com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:679) at java.management/com.sun.jmx.remote.security.MBeanServerAccessController.getAttribute(MBeanServerAccessController.java:324) at org.apache.geode.management.internal.web.controllers.ShellCommandsController.getAttribute(ShellCommandsController.java:136) at jdk.internal.reflect.GeneratedMethodAccessor445.invoke(Unknown Source) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:568) at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:205) at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:150) at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:117) at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:895) at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:808) at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87) at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1067) 
at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:963) at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1006) at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:898)
Could not transfer artifact io.pivotal.gemfire:geode-core:pom:9.10.12 from/to gemfire-release-repo (https://commercial-repo.pivotal.io/data3/gemfire-release-repo/gemfire): Authentication failed for https://commercial-repo.pivotal.io/data3/gemfire-release-repo/gemfire/io/pivotal/gemfire/geode-core/9.10.12/geode-core-9.10.12.pom 401 Unauthorized

recovery-delay setting. So as long as there is at least one member left that has recovery-delay set, that member will trigger a restore redundancy across the entire cluster.

[error 2023/02/15 13:15:46.674 EST server1 <main> tid=0x1] Cache initialization for GemFireCache[id = 1548535364; isClosing = false; isShutDownAll = false; created = Wed Feb 15 13:15:44 EST 2023; server = false; copyOnRead = false; lockLease = 120; lockTimeout = 60] failed because:
java.lang.LinkageError: loader org.apache.geode.internal.classloader.DeployJarChildFirstClassLoader @441b8382 attempted duplicate class definition for nyla.solutions.core.util.Config. (nyla.solutions.core.util.Config is in unnamed module of loader org.apache.geode.internal.classloader.DeployJarChildFirstClassLoader @441b8382, parent loader org.apache.geode.internal.classloader.DeployJarChildFirstClassLoader @6fc6deb7)
at java.base/java.lang.ClassLoader.defineClass1(Native Method)
at java.base/java.lang.ClassLoader.defineClass(ClassLoader.java:1012)
at java.base/java.security.SecureClassLoader.defineClass(SecureClassLoader.java:150)
at java.base/java.net.URLClassLoader.defineClass(URLClassLoader.java:524)
at java.base/java.net.URLClassLoader$1.run(URLClassLoader.java:427)

FAILED Server error, status code: 502, error code: 10001, message: Service broker error: The service broker has been updated, and this service instance is out of date. Please contact your operator.
[upgrade-all-service-instances] 2023/02/10 17:32:31.722650 [upgrade-all] [2c0079f9-4451-487e-bcf8-a21560e14da4] Result: operation accepted
[upgrade-all-service-instances] 2023/02/10 17:32:31.722665 [upgrade-all] [2c0079f9-4451-487e-bcf8-a21560e14da4] Waiting for operation to complete: bosh task id 788305
[upgrade-all-service-instances] 2023/02/10 17:37:51.410042 [upgrade-all] [684c7f51-bd82-434a-992c-e3cae36aa0be] Result: Service Instance operation failure
[upgrade-all-service-instances] 2023/02/10 17:42:02.790612 [upgrade-all] [2c0079f9-4451-487e-bcf8-a21560e14da4] Result: Service Instance operation failure
[upgrade-all-service-instances] 2023/02/10 17:46:09.973433 [upgrade-all] [bde33c79-497e-4d50-8219-18b04a61c54a] Result: Service Instance operation success
[upgrade-all-service-instances] 2023/02/10 17:48:12.721128 [upgrade-all] [ed75f31b-6d22-4fd0-99cd-e835bc0bb5cc] Result: Service Instance operation success
[upgrade-all-service-instances] 2023/02/10 17:48:12.721280 [upgrade-all] FINISHED PROCESSING Status: FAILED; Summary: Number of successful operations: 27; Number of skipped operations: 0; Number of service instance orphans detected: 0; Number of deleted instances before operation could happen: 0; Number of busy instances which could not be processed: 0; Number of service instances that failed to process: 2 [684c7f51-bd82-434a-992c-e3cae36aa0be, 2c0079f9-4451-487e-bcf8-a21560e14da4]
[upgrade-all-service-instances] 2023/02/10 17:48:12.721288 2 errors occurred:
* [684c7f51-bd82-434a-992c-e3cae36aa0be] Operation failed: bosh task id 788304: Failed for bosh task: 788304
* [2c0079f9-4451-487e-bcf8-a21560e14da4] Operation failed: bosh task id 788305: Failed for bosh task: 788305
Stderr Error: failed to run job-process: exit status 1 (exit status 1)
[severe 2023/02/22 09:58:04.105 UTC main tid=0x1] (msgTID=1 msgSN=5) Command can not be processed as Command Service did not get initialized. Reason: null
[severe 2023/02/22 09:58:04.105 UTC main tid=0x1] (msgTID=1 msgSN=6) Command can not be processed as Command Service did not get initialized. Reason: null
javax.management.JMRuntimeException: Command can not be processed as Command Service did not get initialized. Reason: null
at org.apache.geode.management.internal.beans.MemberMBeanBridge.processCommand(MemberMBeanBridge.java:1232)
at org.apache.geode.management.internal.beans.MemberMBean.processCommand(MemberMBean.java:424)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.base/java.lang.reflect.Method.invoke(Unknown Source)
at sun.reflect.misc.Trampoline.invoke(Unknown Source)
at jdk.internal.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.base/java.lang.reflect.Method.invoke(Unknown Source)
at java.base/sun.reflect.misc.MethodUtil.invoke(Unknown Source)
at java.management/com.sun.jmx.mbeanserver.ConvertingMethod.invokeWithOpenReturn(Unknown Source)
at java.management/com.sun.jmx.mbeanserver.ConvertingMethod.invokeWithOpenReturn(Unknown Source)
at java.management/com.sun.jmx.mbeanserver.MXBeanIntrospector.invokeM2(Unknown Source)
at java.management/com.sun.jmx.mbeanserver.MXBeanIntrospector.invokeM2(Unknown Source)
at java.management/com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(Unknown Source)
at java.management/com.sun.jmx.mbeanserver.PerInterface.invoke(Unknown Source)
at java.management/com.sun.jmx.mbeanserver.MBeanSupport.invoke(Unknown Source)
at java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(Unknown Source)
at java.management/com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(Unknown Source)
at java.management/com.sun.jmx.remote.security.MBeanServerAccessController.invoke(Unknown Source)
at java.management.rmi/javax.management.remote.rmi.RMIConnectionImpl.doOperation(Unknown Source)
at java.management.rmi/javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(Unknown Source)
at java.management.rmi/javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(Unknown Source)
at java.management.rmi/javax.management.remote.rmi.RMIConnectionImpl.invoke(Unknown Source)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.base/java.lang.reflect.Method.invoke(Unknown Source)
at java.rmi/sun.rmi.server.UnicastServerRef.dispatch(Unknown Source)
at java.rmi/sun.rmi.transport.Transport$1.run(Unknown Source)
at java.rmi/sun.rmi.transport.Transport$1.run(Unknown Source)
at java.base/java.security.AccessController.doPrivileged(Native Method)
at java.rmi/sun.rmi.transport.Transport.serviceCall(Unknown Source)
at java.rmi/sun.rmi.transport.tcp.TCPTransport.handleMessages(Unknown Source)
at java.rmi/sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(Unknown Source)
at java.rmi/sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.lambda$run$0(Unknown Source)
at java.base/java.security.AccessController.doPrivileged(Native Method)
at java.rmi/sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.base/java.lang.Thread.run(Unknown Source)
It is now available on Tanzu Net to download.

What’s New in VMware GemFire for Tanzu Application Service 1.14.7

tdalsing@pivotal.io? If so, it seems everything should be in working order. If that's the correct account, I'd try resetting the password.

[warn 2023/03/20 08:47:43.001 GMT GemFireMarketDataCacheServer_LDNFI93SAUA <ThreadsMonitor> tid=0x22] Thread <96831> (0x17a3f) that was executed at <20 Mar 2023 08:21:40 GMT> has been stuck for <1560.246 seconds> and number of thread monitor iteration <26> Thread Name <ServerConnection on port 21001 Thread 46076> state <RUNNABLE> Executor Group <ServerConnectionExecutor> Monitored metric <ResourceManagerStats.numThreadsStuck> Thread stack for "ServerConnection on port 21001 Thread 46076" (0x17a3f): java.lang.ThreadState: RUNNABLE (in native) at sun.nio.ch.SocketDispatcher.read0(Native Method) at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:43) at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) at sun.nio.ch.IOUtil.read(IOUtil.java:192) at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:378) at org.apache.geode.internal.net.NioPlainEngine.readAtLeast(NioPlainEngine.java:103) at org.apache.geode.internal.tcp.MsgReader.readAtLeast(MsgReader.java:130) at org.apache.geode.internal.tcp.MsgReader.readHeader(MsgReader.java:58) at org.apache.geode.internal.tcp.Connection.readAck(Connection.java:2731) at org.apache.geode.distributed.internal.direct.DirectChannel.readAcks(DirectChannel.java:401) at org.apache.geode.distributed.internal.direct.DirectChannel.sendToMany(DirectChannel.java:351) at org.apache.geode.distributed.internal.direct.DirectChannel.sendToOne(DirectChannel.java:186) at org.apache.geode.distributed.internal.direct.DirectChannel.send(DirectChannel.java:521) at org.apache.geode.distributed.internal.DistributionImpl.directChannelSend(DistributionImpl.java:348) at org.apache.geode.distributed.internal.DistributionImpl.send(DistributionImpl.java:293) at 
org.apache.geode.distributed.internal.ClusterDistributionManager.sendViaMembershipManager(ClusterDistributionManager.java:2074) at org.apache.geode.distributed.internal.ClusterDistributionManager.sendOutgoing(ClusterDistributionManager.java:2002) at org.apache.geode.distributed.internal.ClusterDistributionManager.sendMessage(ClusterDistributionManager.java:2038) at org.apache.geode.distributed.internal.ClusterDistributionManager.putOutgoing(ClusterDistributionManager.java:1113) at org.apache.geode.internal.cache.DistributedCacheOperation._distribute(DistributedCacheOperation.java:556) at org.apache.geode.internal.cache.DistributedCacheOperation.startOperation(DistributedCacheOperation.java:267) at org.apache.geode.internal.cache.DistributedCacheOperation.distribute(DistributedCacheOperation.java:308) at org.apache.geode.internal.cache.DistributedRegion.distributeUpdate(DistributedRegion.java:517) at org.apache.geode.internal.cache.DistributedRegion.basicPutPart3(DistributedRegion.java:498) at org.apache.geode.internal.cache.map.RegionMapPut.doAfterCompletionActions(RegionMapPut.java:308) at org.apache.geode.internal.cache.map.AbstractRegionMapPut.doPut(AbstractRegionMapPut.java:185) at org.apache.geode.internal.cache.map.AbstractRegionMapPut$$Lambda$316/1439127919.run(Unknown Source) at org.apache.geode.internal.cache.map.AbstractRegionMapPut.runWhileLockedForCacheModification(AbstractRegionMapPut.java:119) at org.apache.geode.internal.cache.map.RegionMapPut.runWhileLockedForCacheModification(RegionMapPut.java:161) at org.apache.geode.internal.cache.map.AbstractRegionMapPut.put(AbstractRegionMapPut.java:169) at org.apache.geode.internal.cache.AbstractRegionMap.basicPut(AbstractRegionMap.java:2016) at org.apache.geode.internal.cache.LocalRegion.virtualPut(LocalRegion.java:5657) at org.apache.geode.internal.cache.DistributedRegion.virtualPut(DistributedRegion.java:393) at 
org.apache.geode.internal.cache.wan.serial.SerialGatewaySenderQueue$SerialGatewaySenderQueueMetaRegion.virtualPut(SerialGatewaySenderQueue.java:1384) at org.apache.geode.internal.cache.LocalRegion.virtualPut(LocalRegion.java:5635) at org.apache.geode.internal.cache.LocalRegionDataView.putEntry(LocalRegionDataView.java:157) at org.apache.geode.internal.cache.LocalRegion.basicPut(LocalRegion.java:5084) at org.apache.geode.internal.cache.LocalRegion.validatedPut(LocalRegion.java:1651) at org.apache.geode.internal.cache.LocalRegion.put(LocalRegion.java:1638) at org.apache.geode.internal.cache.AbstractRegion.put(AbstractRegion.java:445) Locked ownable synchronizers: - None
Currently we are looking at upgrading RHEL to version 9.x. Do we need to update OS-level parameters (limits.conf, sysctl.conf, …)? FYI, the current settings were provided by Pivotal and are working fine.
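As a sanity check after the upgrade, a small script like the following can compare the live limit settings on the RHEL 9.x host against the values you were given. The threshold shown is a placeholder (an assumption for illustration), not official guidance; keep the Pivotal-provided values as the source of truth.

```shell
#!/bin/sh
# Hypothetical post-upgrade check: confirm the host still carries the tuned
# settings. EXPECTED_NOFILE is a PLACEHOLDER -- replace it with the value
# from your Pivotal/VMware-provided configuration.
EXPECTED_NOFILE=8192

actual_nofile=$(ulimit -n)
echo "open files (ulimit -n): $actual_nofile"
if [ "$actual_nofile" != "unlimited" ] && [ "$actual_nofile" -lt "$EXPECTED_NOFILE" ]; then
    echo "WARN: nofile below $EXPECTED_NOFILE; check /etc/security/limits.conf"
fi

# sysctl keys live in the same place on RHEL 9; diff them against your notes.
sysctl net.core.rmem_max net.core.wmem_max 2>/dev/null || true
```

Running this on the old and new hosts side by side makes any drift introduced by the OS upgrade obvious.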


1) There are no recommendations on custom partitioning specific to TAS in the documentation, but they exist for standalone VMs. Can you please share any recommendations for TAS if they differ from standalone? (Recommendations based on our internal analysis are mentioned in point 2.)
2) Based on the documentation, fixed custom partitioning may not be recommended for TAS, and only the standard custom partitioning option is recommended, for the reasons below. Please let us know if this is true (TAS-specific details in comments).
3) Standard partitioning doesn't have any example reference code in the documentation for configuring it on the client side. Can you please provide a GitHub example for one of the three options, or for all of them?
4) Using custom partitioning and colocation should give us better performance. Are any internal benchmarks available with and without these features, so that we can compare performance and provide an internal recommendation on the criteria for using them?
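Regarding question (3), standard custom partitioning centers on a `PartitionResolver`. The sketch below keeps the routing logic in a plain static method so it is self-contained and testable without the GemFire jars; the commented-out `implements PartitionResolver` wiring and the `customerId|orderId` key format are assumptions for illustration, not code from the product documentation.

```java
import java.io.Serializable;
// Real GemFire/Geode wiring (requires the gemfire-core dependency):
// import org.apache.geode.cache.EntryOperation;
// import org.apache.geode.cache.PartitionResolver;

/**
 * Hypothetical resolver: all entries whose keys share the same customerId
 * prefix route to the same bucket. Colocated regions would return the same
 * routing object so related data lands on the same member.
 */
public class CustomerIdResolver /* implements PartitionResolver<String, Object> */ {

    // In the real interface this logic lives in
    // getRoutingObject(EntryOperation<String, Object> op), via op.getKey().
    public static Serializable extractRoutingObject(String key) {
        int sep = key.indexOf('|');
        return sep < 0 ? key : key.substring(0, sep); // customerId portion
    }

    // PartitionResolver also requires a stable name.
    public String getName() {
        return "CustomerIdResolver";
    }
}
```

The resolver must be configured on the partitioned region on the servers (and on the client if single-hop routing should agree with server-side bucketing), e.g. via the region's partition attributes.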
# Problematic frame:
# J 52276 C2 org.apache.geode.cache.query.internal.CompiledIn.evaluate(Lorg/apache/geode/cache/query/internal/ExecutionContext;)Ljava/lang/Object; (289 bytes) @ 0x00007f80170ecc93 [0x00007f80170ec340+0x953]
J 42813 C2 com.jpmorgan.scpp.referencedata.function.BaseRdcFunction.execute(Lorg/apache/geode/cache/execute/FunctionContext;)
J 38479 C2 com.jpmorgan.scpp.referencedata.function.GetInstrumentByTradingLineAlternateIdentifier.getQueryResult(Lorg/apache/geode/cache/execute/FunctionContext;Ljava/lang/String;)Ljava/util/List;
org.apache.geode.cache.query.internal.CompiledIn.evaluate(Lorg/apache/geode/cache/query/internal/ExecutionContext;)Ljava/lang/Object;
Internal exceptions (10 events):
Event: 161869.611 Thread 0x00007f2fec1f3800 Exception <a 'java/lang/ArrayIndexOutOfBoundsException': 78> (0x00007f32f03615a0) thrown at [/HUDSON/workspace/8-2-build-linux-amd64/jdk8u191-mos/11902/hotspot/src/share/vm/interpreter/interpreterRuntime.cpp, line 366]
Event: 161870.164 Thread 0x00007f305d7b0000 Exception <a 'java/io/IOException'> (0x00007f32f0d404c0) thrown at [/HUDSON/workspace/8-2-build-linux-amd64/jdk8u191-mos/11902/hotspot/src/share/vm/prims/jni.cpp, line 709]
Event: 161870.164 Thread 0x00007f305d7b0000 Exception <a 'java/io/IOException'> (0x00007f32f0d45b78) thrown at [/HUDSON/workspace/8-2-build-linux-amd64/jdk8u191-mos/11902/hotspot/src/share/vm/prims/jni.cpp, line 709]
Event: 161870.164 Thread 0x00007f305d7b0000 Exception <a 'java/io/IOException'> (0x00007f32f0d4c420) thrown at [/HUDSON/workspace/8-2-build-linux-amd64/jdk8u191-mos/11902/hotspot/src/share/vm/prims/jni.cpp, line 709]
Event: 161870.165 Thread 0x00007f305d7b0000 Exception <a 'java/io/IOException'> (0x00007f32f0d54550) thrown at [/HUDSON/workspace/8-2-build-linux-amd64/jdk8u191-mos/11902/hotspot/src/share/vm/prims/jni.cpp, line 709]
Event: 161870.165 Thread 0x00007f305d7b0000 Exception <a 'java/io/IOException'> (0x00007f32f0d5e610) thrown at [/HUDSON/workspace/8-2-build-linux-amd64/jdk8u191-mos/11902/hotspot/src/share/vm/prims/jni.cpp, line 709]
Event: 161870.165 Thread 0x00007f305d7b0000 Exception <a 'java/io/IOException'> (0x00007f32f0d65da0) thrown at [/HUDSON/workspace/8-2-build-linux-amd64/jdk8u191-mos/11902/hotspot/src/share/vm/prims/jni.cpp, line 709]
Event: 161870.165 Thread 0x00007f305d7b0000 Exception <a 'java/io/IOException'> (0x00007f32f0d6f2d8) thrown at [/HUDSON/workspace/8-2-build-linux-amd64/jdk8u191-mos/11902/hotspot/src/share/vm/prims/jni.cpp, line 709]
Event: 161870.166 Thread 0x00007f305d7b0000 Exception <a 'java/io/IOException'> (0x00007f32f0d766b0) thrown at [/HUDSON/workspace/8-2-build-linux-amd64/jdk8u191-mos/11902/hotspot/src/share/vm/prims/jni.cpp, line 709]
Event: 161870.298 Thread 0x00007f2e503c5800 Implicit null exception at 0x00007f80170ecc93 to 0x0000000000000000
[warn 2023/04/11 00:58:01.363 UTC rd4-server <Function Execution Processor375> tid=0x5322] VALIDATOR: accountCAID: 34797 should start with a letter followed by digits
[warn 2023/04/11 01:01:03.806 UTC rd4-server <Function Execution Processor375> tid=0x5322] Special assets do not exist for arguments {profileTypeDomain=Country, identifierValue=A443328, sealDeploymentId=83731, sealApplicationId=101991, isWebServiceInd=N, profileTypeId=JP, userName=XXXX, sealEmailDl=AME_Custody_L3_Support@restricted.chase.com, identifierIdDomain=INSTRUMENT_JID}
[warn 2023/04/11 01:01:46.278 UTC rd4-server <ServerConnection on port 40411 Thread 8250> tid=0x4b48] Server connection from [identity(169.101.185.189(13916:loner):58039:5ad9b068,connection=1; port=35200]: connection disconnect detected by EOF.
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00007f80170ecc93, pid=3600800, tid=0x00007f2d02493700
#
>> My view is to go with multiple server instances on a single node; any expert suggestions would be much appreciated.
In our implementation we have deployed a function that polls MBeans/JMX metrics. Once started (which we do at cache server member start), this function continues to run for as long as the cache server JVM is running, polling every 10 seconds. We usually stop this function first, before stopping a cache server member. I am wondering if this is even necessary.
/**
 * @param intervalSeconds the number of seconds to sleep between metric capture/reporting.
 */
public GemfireMetricsMonitor(int intervalSeconds) {
    super();
    // Don't prevent the process from ending: the JVM exits when the only
    // threads still running are daemon threads.
    this.setDaemon(true);
    this.intervalMillis = intervalSeconds * 1000L;
}

When a client crashes, restart it as quickly as possible in the usual way.
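To the daemon-thread question above: a daemon thread never blocks JVM exit, so stopping the polling function first is not strictly required for shutdown to proceed. An explicit stop only matters if the final poll must flush or report cleanly. A self-contained sketch of that pattern (class and method names are hypothetical, not the actual monitor):

```java
// Minimal sketch of a daemon polling loop like the monitor above.
// Because the thread is a daemon, the JVM will exit even if shutdown()
// is never called; shutdown() just makes the stop deterministic.
public class PollingMonitor extends Thread {
    private final long intervalMillis;
    private volatile boolean running = true;

    public PollingMonitor(int intervalSeconds) {
        setDaemon(true);                      // never blocks JVM shutdown
        this.intervalMillis = intervalSeconds * 1000L;
    }

    public void shutdown() {                  // optional graceful stop
        running = false;
        interrupt();                          // wake a sleeping poll loop
    }

    @Override
    public void run() {
        while (running) {
            // ... poll metrics here ...
            try {
                Thread.sleep(intervalMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;                       // exit promptly on interrupt
            }
        }
    }
}
```

With this shape, stopping the function before stopping the member is a tidiness choice, not a correctness requirement.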
How can I check whether a region index is being used when reading a value by key through the Java API? Is an index only applicable to OQL?

product_name variable: Geode, Pivotal GemFire, Tanzu GemFire, VMware GemFire, R, …

apply plugin: 'signing'
apply plugin: 'java'
apply plugin: 'maven-publish'

sourceCompatibility = 1.8
version = '1.2.1-SNAPSHOT'

allprojects {
    tasks.withType(Javadoc) {
        options.addStringOption('Xdoclint:none', '-quiet')
    }
}

group = 'io.pivotal.services.dataTx'
archivesBaseName = 'gemfire-extensions-core'

ext {
    gemFireVersion = '10.0.0'
}

compileJava {
    sourceCompatibility = '11'
    targetCompatibility = '11'
}

task myJavadocs(type: Javadoc) {
    source = sourceSets.main.allJava
}

task javadocJar(type: Jar) {
    archiveClassifier.set("javadoc")
    from javadoc
}

task sourcesJar(type: Jar) {
    archiveClassifier.set("sources")
    from sourceSets.main.allSource
}

artifacts { archives sourcesJar, javadocJar }

signing {
    sign(publishing.publications)
}

jar {
    manifest {
        attributes(
            'Implementation-Title': 'gemfire-extensions-core',
            'Can-Redefine-Classes': false,
            'Can-Set-Native-Method-Prefix': false
        )
    }
}

repositories {
    mavenCentral()
    mavenLocal()
    maven {
        url "https://commercial-repo.pivotal.io/data3/gemfire-release-repo/gemfire"
        metadataSources {
            mavenPom()
            artifact()
            ignoreGradleMetadataRedirection()
        }
        credentials {
            username repoUsername
            password repoPassword
        }
    }
}

dependencies {
    implementation group: 'com.vmware.gemfire', name: 'gemfire-core', version: gemFireVersion
    // implementation 'org.apache.logging.log4j:log4j-core:2.18.0'
    // implementation 'org.apache.logging.log4j:log4j-api:2.18.0'
    // implementation 'org.apache.shiro:shiro-core:1.9.1'
    // implementation 'commons-beanutils:commons-beanutils:1.9.4'
    // implementation group: 'com.fasterxml.jackson.core', name: 'jackson-databind', version: '2.13.4'
    // implementation group: 'com.fasterxml.jackson.core', name: 'jackson-annotations', version: '2.13.4'
    implementation group: 'com.vmware.gemfire', name: 'gemfire-common', version: gemFireVersion
    implementation group: 'com.vmware.gemfire', name: 'gemfire-lucene', version: gemFireVersion
    implementation group: 'com.vmware.gemfire', name: 'gemfire-cq', version: gemFireVersion
    implementation group: 'com.vmware.gemfire', name: 'gemfire-wan', version: gemFireVersion
    implementation 'org.junit.jupiter:junit-jupiter:5.9.0'
    implementation group: 'com.zaxxer', name: 'HikariCP', version: '4.0.3'
    implementation group: 'com.github.nyla-solutions', name: 'nyla.solutions.core', version: '1.5.1'
    testImplementation group: 'org.mockito', name: 'mockito-junit-jupiter', version: '4.6.1'
    testImplementation group: 'org.mockito', name: 'mockito-core', version: '4.6.1'
    testImplementation group: 'org.junit.jupiter', name: 'junit-jupiter', version: '5.9.0'
    testImplementation group: 'org.junit.jupiter', name: 'junit-jupiter-engine', version: '5.9.0'
    testImplementation group: 'com.h2database', name: 'h2', version: '2.1.214'
    testImplementation group: 'org.postgresql', name: 'postgresql', version: '42.2.9'
    testImplementation("org.junit.jupiter:junit-jupiter-api:5.9.0")
}

test {
    // Enable JUnit 5 (Gradle 4.6+).
    useJUnitPlatform()
    // Always run tests, even when nothing changed.
    dependsOn 'cleanTest'
    // Show test results.
    testLogging {
        events "passed", "skipped", "failed"
    }
}

task sourceJar(type: Jar) {
    classifier "sources"
    from sourceSets.main.allJava
}

publishing {
    publications {
        maven(MavenPublication) {
            pom {
                name = 'gemfire-extensions-core'
                groupId = group
                artifactId = 'gemfire-extensions-core'
                description = 'This Java API provides support for GemFire'
                packaging = 'jar'
                url = 'https://github.com/ggreen/gemfire-extensions'
                licenses {
                    license {
                        url = 'https://github.com/ggreen/gemfire-extensions/blob/main/LICENSE'
                    }
                }
                developers {
                    developer {
                        id = 'ggreen'
                        name = 'Gregory Green'
                        email = 'gregoryg@vmware.com'
                    }
                }
                scm {
                    connection = 'scm:git:https://github.com/ggreen/gemfire-extensions.git'
                    developerConnection = 'scm:git:https://github.com/ggreen/gemfire-extensions.git'
                    url = 'https://github.com/ggreen/gemfire-extensions.git'
                }
            }
            from components.java
            artifact sourcesJar
            artifact javadocJar
        }
    }
    repositories {
        maven {
            name = "CentralMaven" // optional target repository name
            url = "https://oss.sonatype.org/service/local/staging/deploy/maven2/"
            credentials {
                username = ossrhUsername
                password = ossrhPassword
            }
        }
    }
}

root@ubuntu:~# kubectl -n tanzu-gemfire get svc
NAME                               TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                       AGE
contour-gemfire-gateway            ClusterIP      10.96.139.243   <none>           8001/TCP                      21h
envoy-gemfire-gateway              LoadBalancer   10.96.63.235    172.18.255.204   9000:32725/TCP                21h
gemfire-cluster-locator            ClusterIP      None            <none>           10334/TCP,7070/TCP,4321/TCP   20h
gemfire-cluster-locator-0          ClusterIP      None            <none>           10334/TCP,7070/TCP,4321/TCP   20h
gemfire-cluster-server             ClusterIP      None            <none>           40404/TCP,7070/TCP,4321/TCP   20h
gemfire-cluster-server-0           ClusterIP      None            <none>           40404/TCP,7070/TCP,4321/TCP   20h
gemfire-cluster-server-1           ClusterIP      None            <none>           40404/TCP,7070/TCP,4321/TCP   20h
gemfire-operator-webhook-service   ClusterIP      10.96.17.31     <none>           443/TCP                       21h
load-balancer-dev-api              LoadBalancer   10.96.123.149   172.18.255.201   7070:31133/TCP                183d
load-balancer-mgmt                 LoadBalancer   10.96.212.38    172.18.255.200   7070:30612/TCP                183d
application.properties in my Spring Boot app (I have copied the certs, using TLS):

spring.data.gemfire.pool.locators=gemfire-cluster-locator-0.tanzu-gemfire.svc.cluster.local[10334]
service-gateway.hostname=172.18.255.204
service-gateway.port=9000
gemfire.ssl-enabled-components=all
gemfire.ssl-endpoint-identification-enabled=true
gemfire.ssl-truststore=/home/dmitry/code/dmitrynovik/twitter-wordcloud/gemfire-certs/truststore.p12
gemfire.ssl-truststore-password=35G2cF9movsfwIVeM8njYZwo_yE_NFx3xaSeuT2JyA4=
gemfire.ssl-keystore=/home/dmitry/code/dmitrynovik/twitter-wordcloud/gemfire-certs/keystore.p12
gemfire.ssl-keystore-password=35G2cF9movsfwIVeM8njYZwo_yE_NFx3xaSeuT2JyA4=
org.apache.geode.cache.client.NoAvailableLocatorsException: Unable to connect to any locators in the list [HostAndPort[gemfire-cluster-locator-0.tanzu-gemfire.svc.cluster.local:10334]]
gfsh:

gfsh>connect --locator=172.18.255.204[9000] --trust-store=/home/dmitry/code/dmitrynovik/twitter-wordcloud/gemfire-certs/truststore.p12
key-store: /home/dmitry/code/dmitrynovik/twitter-wordcloud/gemfire-certs/keystore.p12
key-store-password: ********************************************
key-store-type(default: JKS):
trust-store-password: ********************************************
trust-store-type(default: JKS):
ssl-ciphers(default: any):
ssl-protocols(default: any):
ssl-enabled-components(default: all):
Connecting to Locator at [host=172.18.255.204, port=9000] ..
Connection reset
feature/gf-on-k8s), I'd appreciate any help connecting my external Spring Boot app to GemFire on K8s, thanks.

connect --locator=hello-world-gemfire-cluster-locator-0.hello-world-gemfire-cluster-locator.gemfire-cluster.svc.cluster.local[10334] --security-properties-file=/security/gfsecurity.properties
[GEMFIRE-CLUSTER-NAME]-locator-[LOCATOR-NUMBER].[GEMFIRE-CLUSTER-NAME]-locator.[NAMESPACE-NAME].svc.cluster.local
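Plugging hypothetical values into that pattern (cluster name `gemfire-cluster`, first locator, namespace `tanzu-gemfire`, matching the service names above — substitute your own), an in-cluster gfsh connect would look like:

```
gfsh>connect --locator=gemfire-cluster-locator-0.gemfire-cluster-locator.tanzu-gemfire.svc.cluster.local[10334] --security-properties-file=/security/gfsecurity.properties
```

Note that this DNS name only resolves from inside the cluster; an external client needs the gateway/LoadBalancer route instead.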
public void callbackDelete(EntryEvent event) throws IOException
{
    RoutingKey key = (RoutingKey) event.getKey();
    logConfig("Key : " + key);

    Key1 = key.getKey1();
    Key2 = key.getKey2();
    Key3 = key.getKey3();
    logConfig("Key1 :" + Key1);
    logConfig("Key2 :" + Key2);
    logConfig("Key3 :" + Key3);

    logConfig("Region Name : " + event.getRegion().getName());
    logConfig("Region Value : " + event.getRegion().toString());

    json = (String) event.getOldValue();
    logConfig("JSON Data:" + json);

    Random randomGenerator = new Random();
    boolean trace = randomGenerator.nextInt(100) < 2; // trace ~2% of calls
    RequestBuilder requestBuilder = null;
    CloseableHttpResponse response = null;
    long endTime = 0L;
    long startTime = 0L;

gfsh>create index --name=firstName_MyCustomer --expression=firstName --region=/MyCustomerPdx
Region MyCustomerPdx does not exist.
Using the --member parameter, it created the index successfully:

gfsh>create index --name=firstNameIndex --expression=firstName --region=/MyCustomerPdx --member=server1
Configuration change is not persisted because the command is executed on specific member.
Member | Status | Message
------------------------------------ | ------ | --------------------------
192.168.67.9(server1:7053)<v2>:58821 | OK | Index successfully created
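To keep the index in the cluster configuration (avoiding the "not persisted" warning above), one approach is to create the region first so it exists cluster-wide, then create the index without --member. The region type below is an assumption for illustration:

```
gfsh>create region --name=MyCustomerPdx --type=PARTITION
gfsh>create index --name=firstNameIndex --expression=firstName --region=/MyCustomerPdx
```

Created this way, both the region and the index are recorded in cluster configuration and applied to members that join later.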